2026-03-09T18:21:50.987 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-09T18:21:50.991 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-09T18:21:51.014 INFO:teuthology.run:Config:
archive_path: /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/604
branch: squid
description: orch/cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_python}
email: null
first_in_suite: false
flavor: default
job_id: '604'
last_in_suite: false
machine_type: vps
name: kyr-2026-03-09_11:23:05-orch-squid-none-default-vps
no_nested_subset: false
openstack:
- volumes:
    count: 4
    size: 10
os_type: centos
os_version: 9.stream
overrides:
  admin_socket:
    branch: squid
  ansible.cephlab:
    branch: main
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: UTC
  ceph:
    conf:
      global:
        mon election default strategy: 1
        ms type: async
      mgr:
        debug mgr: 20
        debug ms: 1
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug ms: 1
        debug osd: 20
        osd mclock iops capacity threshold hdd: 49000
        osd shutdown pgref assert: true
    flavor: default
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - but it is still running
    - overall HEALTH_
    - \(OSDMAP_FLAGS\)
    - \(PG_
    - \(OSD_
    - \(OBJECT_
    - \(POOL_APP_NOT_ENABLED\)
    log-only-match:
    - CEPHADM_
    sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  cephadm:
    cephadm_mode: cephadm-package
  install:
    ceph:
      extra_system_packages:
        deb:
        - python3-pytest
        rpm:
        - python3-pytest
      flavor: default
      sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
    extra_packages:
    - cephadm
    extra_system_packages:
      deb:
      - python3-xmltodict
      - python3-jmespath
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-xmltodict
      - python3-jmespath
  selinux:
    allowlist:
    - scontext=system_u:system_r:logrotate_t:s0
    - scontext=system_u:system_r:getty_t:s0
  workunit:
    branch: tt-squid
    sha1: 569c3e99c9b32a51b4eaf08731c728f4513ed589
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - mon.a
  - mon.c
  - mgr.y
  - osd.0
  - osd.1
  - osd.2
  - osd.3
  - client.0
  - ceph.rgw.foo.a
  - node-exporter.a
  - alertmanager.a
- - mon.b
  - mgr.x
  - osd.4
  - osd.5
  - osd.6
  - osd.7
  - client.1
  - prometheus.a
  - grafana.a
  - node-exporter.b
  - ceph.iscsi.iscsi.a
seed: 3443
sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
sleep_before_teardown: 0
subset: 1/64
suite: orch
suite_branch: tt-squid
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 569c3e99c9b32a51b4eaf08731c728f4513ed589
targets:
  vm04.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHNrwU4qqoJYrdDHj9Ya4KKLLfg3DmzKdLeb65v53bYJwuzp+p8u5yVpyZX6hjq2RL9MoCHhdwAJm03XxqVYugg=
  vm09.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIRUIzdwo5pAt8Mixb6aS46vlHPtAnxMGdZt3KAk//sABxrhhk4bbaIyJIJgffHbr1KLHNv8IgLe89AD5t9mXOw=
tasks:
- pexec:
    all:
    - sudo dnf remove nvme-cli -y
    - sudo dnf install nvmetcli nvme-cli -y
- install: null
- cephadm:
    conf:
      mgr:
        debug mgr: 20
        debug ms: 1
- workunit:
    clients:
      client.0:
      - rados/test_python.sh
    timeout: 1h
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-09_11:23:05
tube: vps
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.611473
2026-03-09T18:21:51.015 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa; will attempt to use it
2026-03-09T18:21:51.015 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks
2026-03-09T18:21:51.015 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-09T18:21:51.016 INFO:teuthology.task.internal:Checking packages...
2026-03-09T18:21:51.016 INFO:teuthology.task.internal:Checking packages for os_type 'centos', flavor 'default' and ceph hash 'e911bdebe5c8faa3800735d1568fcdca65db60df'
2026-03-09T18:21:51.016 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-09T18:21:51.016 INFO:teuthology.packaging:ref: None
2026-03-09T18:21:51.016 INFO:teuthology.packaging:tag: None
2026-03-09T18:21:51.016 INFO:teuthology.packaging:branch: squid
2026-03-09T18:21:51.016 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T18:21:51.016 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&ref=squid
2026-03-09T18:21:51.792 INFO:teuthology.task.internal:Found packages for ceph version 19.2.3-678.ge911bdeb
2026-03-09T18:21:51.793 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-09T18:21:51.794 INFO:teuthology.task.internal:no buildpackages task found
2026-03-09T18:21:51.794 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-09T18:21:51.794 INFO:teuthology.task.internal:Saving configuration
2026-03-09T18:21:51.799 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-09T18:21:51.800 INFO:teuthology.task.internal.check_lock:Checking locks...
2026-03-09T18:21:51.806 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm04.local', 'description': '/archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/604', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-09 18:20:19.179568', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:04', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHNrwU4qqoJYrdDHj9Ya4KKLLfg3DmzKdLeb65v53bYJwuzp+p8u5yVpyZX6hjq2RL9MoCHhdwAJm03XxqVYugg='}
2026-03-09T18:21:51.812 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm09.local', 'description': '/archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/604', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-09 18:20:19.179997', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:09', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIRUIzdwo5pAt8Mixb6aS46vlHPtAnxMGdZt3KAk//sABxrhhk4bbaIyJIJgffHbr1KLHNv8IgLe89AD5t9mXOw='}
2026-03-09T18:21:51.812 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-09T18:21:51.812 INFO:teuthology.task.internal:roles: ubuntu@vm04.local - ['mon.a', 'mon.c', 'mgr.y', 'osd.0', 'osd.1', 'osd.2', 'osd.3', 'client.0', 'ceph.rgw.foo.a', 'node-exporter.a', 'alertmanager.a']
2026-03-09T18:21:51.813 INFO:teuthology.task.internal:roles: ubuntu@vm09.local - ['mon.b', 'mgr.x', 'osd.4', 'osd.5', 'osd.6', 'osd.7', 'client.1', 'prometheus.a', 'grafana.a', 'node-exporter.b', 'ceph.iscsi.iscsi.a']
2026-03-09T18:21:51.813 INFO:teuthology.run_tasks:Running task console_log...
2026-03-09T18:21:51.818 DEBUG:teuthology.task.console_log:vm04 does not support IPMI; excluding
2026-03-09T18:21:51.824 DEBUG:teuthology.task.console_log:vm09 does not support IPMI; excluding
2026-03-09T18:21:51.824 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7ff334b72170>, signals=[15])
2026-03-09T18:21:51.824 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-09T18:21:51.825 INFO:teuthology.task.internal:Opening connections...
2026-03-09T18:21:51.825 DEBUG:teuthology.task.internal:connecting to ubuntu@vm04.local
2026-03-09T18:21:51.825 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm04.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T18:21:51.888 DEBUG:teuthology.task.internal:connecting to ubuntu@vm09.local
2026-03-09T18:21:51.888 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm09.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T18:21:51.947 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-09T18:21:51.948 DEBUG:teuthology.orchestra.run.vm04:> uname -m
2026-03-09T18:21:52.015 INFO:teuthology.orchestra.run.vm04.stdout:x86_64
2026-03-09T18:21:52.016 DEBUG:teuthology.orchestra.run.vm04:> cat /etc/os-release
2026-03-09T18:21:52.074 INFO:teuthology.orchestra.run.vm04.stdout:NAME="CentOS Stream"
2026-03-09T18:21:52.074 INFO:teuthology.orchestra.run.vm04.stdout:VERSION="9"
2026-03-09T18:21:52.074 INFO:teuthology.orchestra.run.vm04.stdout:ID="centos"
2026-03-09T18:21:52.075 INFO:teuthology.orchestra.run.vm04.stdout:ID_LIKE="rhel fedora"
2026-03-09T18:21:52.075 INFO:teuthology.orchestra.run.vm04.stdout:VERSION_ID="9"
2026-03-09T18:21:52.075 INFO:teuthology.orchestra.run.vm04.stdout:PLATFORM_ID="platform:el9"
2026-03-09T18:21:52.075 INFO:teuthology.orchestra.run.vm04.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-09T18:21:52.075 INFO:teuthology.orchestra.run.vm04.stdout:ANSI_COLOR="0;31"
2026-03-09T18:21:52.075 INFO:teuthology.orchestra.run.vm04.stdout:LOGO="fedora-logo-icon"
2026-03-09T18:21:52.075 INFO:teuthology.orchestra.run.vm04.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-09T18:21:52.075 INFO:teuthology.orchestra.run.vm04.stdout:HOME_URL="https://centos.org/"
2026-03-09T18:21:52.075 INFO:teuthology.orchestra.run.vm04.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-09T18:21:52.075 INFO:teuthology.orchestra.run.vm04.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-09T18:21:52.075 INFO:teuthology.orchestra.run.vm04.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-09T18:21:52.075 INFO:teuthology.lock.ops:Updating vm04.local on lock server
2026-03-09T18:21:52.080 DEBUG:teuthology.orchestra.run.vm09:> uname -m
2026-03-09T18:21:52.097 INFO:teuthology.orchestra.run.vm09.stdout:x86_64
2026-03-09T18:21:52.097 DEBUG:teuthology.orchestra.run.vm09:> cat /etc/os-release
2026-03-09T18:21:52.156 INFO:teuthology.orchestra.run.vm09.stdout:NAME="CentOS Stream"
2026-03-09T18:21:52.156 INFO:teuthology.orchestra.run.vm09.stdout:VERSION="9"
2026-03-09T18:21:52.156 INFO:teuthology.orchestra.run.vm09.stdout:ID="centos"
2026-03-09T18:21:52.156 INFO:teuthology.orchestra.run.vm09.stdout:ID_LIKE="rhel fedora"
2026-03-09T18:21:52.156 INFO:teuthology.orchestra.run.vm09.stdout:VERSION_ID="9"
2026-03-09T18:21:52.156 INFO:teuthology.orchestra.run.vm09.stdout:PLATFORM_ID="platform:el9"
2026-03-09T18:21:52.156 INFO:teuthology.orchestra.run.vm09.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-09T18:21:52.156 INFO:teuthology.orchestra.run.vm09.stdout:ANSI_COLOR="0;31"
2026-03-09T18:21:52.156 INFO:teuthology.orchestra.run.vm09.stdout:LOGO="fedora-logo-icon"
2026-03-09T18:21:52.156 INFO:teuthology.orchestra.run.vm09.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-09T18:21:52.156 INFO:teuthology.orchestra.run.vm09.stdout:HOME_URL="https://centos.org/"
2026-03-09T18:21:52.156 INFO:teuthology.orchestra.run.vm09.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-09T18:21:52.156 INFO:teuthology.orchestra.run.vm09.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-09T18:21:52.156 INFO:teuthology.orchestra.run.vm09.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-09T18:21:52.156 INFO:teuthology.lock.ops:Updating vm09.local on lock server
2026-03-09T18:21:52.159 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-09T18:21:52.161 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-09T18:21:52.162 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-09T18:21:52.162 DEBUG:teuthology.orchestra.run.vm04:> test '!' -e /home/ubuntu/cephtest
2026-03-09T18:21:52.164 DEBUG:teuthology.orchestra.run.vm09:> test '!' -e /home/ubuntu/cephtest
2026-03-09T18:21:52.213 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-09T18:21:52.214 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-09T18:21:52.214 DEBUG:teuthology.orchestra.run.vm04:> test -z $(ls -A /var/lib/ceph)
2026-03-09T18:21:52.221 DEBUG:teuthology.orchestra.run.vm09:> test -z $(ls -A /var/lib/ceph)
2026-03-09T18:21:52.235 INFO:teuthology.orchestra.run.vm04.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-09T18:21:52.271 INFO:teuthology.orchestra.run.vm09.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-09T18:21:52.272 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-09T18:21:52.281 DEBUG:teuthology.orchestra.run.vm04:> test -e /ceph-qa-ready
2026-03-09T18:21:52.300 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T18:21:52.545 DEBUG:teuthology.orchestra.run.vm09:> test -e /ceph-qa-ready
2026-03-09T18:21:52.561 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T18:21:52.791 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-09T18:21:52.792 INFO:teuthology.task.internal:Creating test directory...
2026-03-09T18:21:52.792 DEBUG:teuthology.orchestra.run.vm04:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-09T18:21:52.794 DEBUG:teuthology.orchestra.run.vm09:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-09T18:21:52.813 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-09T18:21:52.815 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-09T18:21:52.816 INFO:teuthology.task.internal:Creating archive directory...
2026-03-09T18:21:52.816 DEBUG:teuthology.orchestra.run.vm04:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-09T18:21:52.855 DEBUG:teuthology.orchestra.run.vm09:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-09T18:21:52.877 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-09T18:21:52.878 INFO:teuthology.task.internal:Enabling coredump saving...
2026-03-09T18:21:52.878 DEBUG:teuthology.orchestra.run.vm04:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-09T18:21:52.929 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T18:21:52.929 DEBUG:teuthology.orchestra.run.vm09:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-09T18:21:52.947 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T18:21:52.947 DEBUG:teuthology.orchestra.run.vm04:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-09T18:21:52.972 DEBUG:teuthology.orchestra.run.vm09:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-09T18:21:52.995 INFO:teuthology.orchestra.run.vm04.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T18:21:53.004 INFO:teuthology.orchestra.run.vm04.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T18:21:53.019 INFO:teuthology.orchestra.run.vm09.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T18:21:53.031 INFO:teuthology.orchestra.run.vm09.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T18:21:53.032 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-09T18:21:53.034 INFO:teuthology.task.internal:Configuring sudo...
2026-03-09T18:21:53.034 DEBUG:teuthology.orchestra.run.vm04:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-09T18:21:53.048 DEBUG:teuthology.orchestra.run.vm09:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-09T18:21:53.097 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-09T18:21:53.100 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
2026-03-09T18:21:53.100 DEBUG:teuthology.orchestra.run.vm04:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-09T18:21:53.115 DEBUG:teuthology.orchestra.run.vm09:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-09T18:21:53.153 DEBUG:teuthology.orchestra.run.vm04:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T18:21:53.198 DEBUG:teuthology.orchestra.run.vm04:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T18:21:53.258 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-09T18:21:53.258 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-09T18:21:53.320 DEBUG:teuthology.orchestra.run.vm09:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T18:21:53.347 DEBUG:teuthology.orchestra.run.vm09:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T18:21:53.403 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-09T18:21:53.403 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-09T18:21:53.461 DEBUG:teuthology.orchestra.run.vm04:> sudo service rsyslog restart
2026-03-09T18:21:53.464 DEBUG:teuthology.orchestra.run.vm09:> sudo service rsyslog restart
2026-03-09T18:21:53.494 INFO:teuthology.orchestra.run.vm04.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-09T18:21:53.530 INFO:teuthology.orchestra.run.vm09.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-09T18:21:53.893 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-09T18:21:53.895 INFO:teuthology.task.internal:Starting timer...
2026-03-09T18:21:53.895 INFO:teuthology.run_tasks:Running task pcp...
2026-03-09T18:21:53.897 INFO:teuthology.run_tasks:Running task selinux...
2026-03-09T18:21:53.899 DEBUG:teuthology.task:Applying overrides for task selinux: {'allowlist': ['scontext=system_u:system_r:logrotate_t:s0', 'scontext=system_u:system_r:getty_t:s0']}
2026-03-09T18:21:53.900 INFO:teuthology.task.selinux:Excluding vm04: VMs are not yet supported
2026-03-09T18:21:53.900 INFO:teuthology.task.selinux:Excluding vm09: VMs are not yet supported
2026-03-09T18:21:53.900 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-09T18:21:53.900 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-09T18:21:53.900 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-09T18:21:53.900 INFO:teuthology.run_tasks:Running task ansible.cephlab...
2026-03-09T18:21:53.901 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'UTC'}}
2026-03-09T18:21:53.901 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/ceph/ceph-cm-ansible.git
2026-03-09T18:21:53.903 INFO:teuthology.repo_utils:Fetching github.com_ceph_ceph-cm-ansible_main from origin
2026-03-09T18:21:54.493 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main to origin/main
2026-03-09T18:21:54.499 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-09T18:21:54.499 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "UTC"}' -i /tmp/teuth_ansible_inventory4dcvb3u0 --limit vm04.local,vm09.local /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-09T18:23:57.436 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm04.local'), Remote(name='ubuntu@vm09.local')]
2026-03-09T18:23:57.436 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm04.local'
2026-03-09T18:23:57.436 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm04.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T18:23:57.507 DEBUG:teuthology.orchestra.run.vm04:> true
2026-03-09T18:23:57.584 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm04.local'
2026-03-09T18:23:57.584 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm09.local'
2026-03-09T18:23:57.585 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm09.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T18:23:57.652 DEBUG:teuthology.orchestra.run.vm09:> true
2026-03-09T18:23:57.732 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm09.local'
2026-03-09T18:23:57.732 INFO:teuthology.run_tasks:Running task clock...
2026-03-09T18:23:57.734 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
2026-03-09T18:23:57.735 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-09T18:23:57.735 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-09T18:23:57.736 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-09T18:23:57.737 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-09T18:23:57.777 INFO:teuthology.orchestra.run.vm04.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-09T18:23:57.789 INFO:teuthology.orchestra.run.vm04.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-09T18:23:57.812 INFO:teuthology.orchestra.run.vm04.stderr:sudo: ntpd: command not found
2026-03-09T18:23:57.814 INFO:teuthology.orchestra.run.vm09.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-09T18:23:57.825 INFO:teuthology.orchestra.run.vm04.stdout:506 Cannot talk to daemon
2026-03-09T18:23:57.835 INFO:teuthology.orchestra.run.vm09.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-09T18:23:57.841 INFO:teuthology.orchestra.run.vm04.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-09T18:23:57.858 INFO:teuthology.orchestra.run.vm04.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-09T18:23:57.874 INFO:teuthology.orchestra.run.vm09.stderr:sudo: ntpd: command not found
2026-03-09T18:23:57.891 INFO:teuthology.orchestra.run.vm09.stdout:506 Cannot talk to daemon
2026-03-09T18:23:57.913 INFO:teuthology.orchestra.run.vm09.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-09T18:23:57.916 INFO:teuthology.orchestra.run.vm04.stderr:bash: line 1: ntpq: command not found
2026-03-09T18:23:57.937 INFO:teuthology.orchestra.run.vm09.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-09T18:23:57.995 INFO:teuthology.orchestra.run.vm09.stderr:bash: line 1: ntpq: command not found
2026-03-09T18:23:57.995 INFO:teuthology.orchestra.run.vm04.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-09T18:23:57.995 INFO:teuthology.orchestra.run.vm04.stdout:===============================================================================
2026-03-09T18:23:57.995 INFO:teuthology.orchestra.run.vm04.stdout:^? 172-104-138-148.ip.linod> 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-09T18:23:57.995 INFO:teuthology.orchestra.run.vm04.stdout:^? static.222.16.42.77.clie> 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-09T18:23:57.996 INFO:teuthology.orchestra.run.vm04.stdout:^? stratum2-1.NTP.TechFak.N> 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-09T18:23:57.996 INFO:teuthology.orchestra.run.vm04.stdout:^? cloudrouter.1in1.net 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-09T18:23:57.998 INFO:teuthology.orchestra.run.vm09.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-09T18:23:57.998 INFO:teuthology.orchestra.run.vm09.stdout:===============================================================================
2026-03-09T18:23:57.998 INFO:teuthology.orchestra.run.vm09.stdout:^? cloudrouter.1in1.net 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-09T18:23:57.998 INFO:teuthology.orchestra.run.vm09.stdout:^? 172-104-138-148.ip.linod> 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-09T18:23:57.998 INFO:teuthology.orchestra.run.vm09.stdout:^? static.222.16.42.77.clie> 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-09T18:23:57.998 INFO:teuthology.orchestra.run.vm09.stdout:^? stratum2-1.NTP.TechFak.N> 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-09T18:23:57.998 INFO:teuthology.run_tasks:Running task pexec...
2026-03-09T18:23:58.001 INFO:teuthology.task.pexec:Executing custom commands...
2026-03-09T18:23:58.001 DEBUG:teuthology.orchestra.run.vm04:> TESTDIR=/home/ubuntu/cephtest bash -s
2026-03-09T18:23:58.001 DEBUG:teuthology.orchestra.run.vm09:> TESTDIR=/home/ubuntu/cephtest bash -s
2026-03-09T18:23:58.040 DEBUG:teuthology.task.pexec:ubuntu@vm04.local< sudo dnf remove nvme-cli -y
2026-03-09T18:23:58.040 DEBUG:teuthology.task.pexec:ubuntu@vm04.local< sudo dnf install nvmetcli nvme-cli -y
2026-03-09T18:23:58.040 INFO:teuthology.task.pexec:Running commands on host ubuntu@vm04.local
2026-03-09T18:23:58.040 INFO:teuthology.task.pexec:sudo dnf remove nvme-cli -y
2026-03-09T18:23:58.040 INFO:teuthology.task.pexec:sudo dnf install nvmetcli nvme-cli -y
2026-03-09T18:23:58.042 DEBUG:teuthology.task.pexec:ubuntu@vm09.local< sudo dnf remove nvme-cli -y
2026-03-09T18:23:58.042 DEBUG:teuthology.task.pexec:ubuntu@vm09.local< sudo dnf install nvmetcli nvme-cli -y
2026-03-09T18:23:58.042 INFO:teuthology.task.pexec:Running commands on host ubuntu@vm09.local
2026-03-09T18:23:58.042 INFO:teuthology.task.pexec:sudo dnf remove nvme-cli -y
2026-03-09T18:23:58.042 INFO:teuthology.task.pexec:sudo dnf install nvmetcli nvme-cli -y
2026-03-09T18:23:58.291 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: nvme-cli
2026-03-09T18:23:58.292 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal.
2026-03-09T18:23:58.295 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-09T18:23:58.296 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do.
2026-03-09T18:23:58.296 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-09T18:23:58.312 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: nvme-cli
2026-03-09T18:23:58.312 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-09T18:23:58.315 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-09T18:23:58.316 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-09T18:23:58.316 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-09T18:23:58.786 INFO:teuthology.orchestra.run.vm09.stdout:Last metadata expiration check: 0:01:18 ago on Mon 09 Mar 2026 06:22:40 PM UTC.
2026-03-09T18:23:58.796 INFO:teuthology.orchestra.run.vm04.stdout:Last metadata expiration check: 0:01:16 ago on Mon 09 Mar 2026 06:22:42 PM UTC.
2026-03-09T18:23:58.909 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-09T18:23:58.909 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-09T18:23:58.909 INFO:teuthology.orchestra.run.vm09.stdout: Package Architecture Version Repository Size
2026-03-09T18:23:58.910 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-09T18:23:58.910 INFO:teuthology.orchestra.run.vm09.stdout:Installing:
2026-03-09T18:23:58.910 INFO:teuthology.orchestra.run.vm09.stdout: nvme-cli x86_64 2.16-1.el9 baseos 1.2 M
2026-03-09T18:23:58.910 INFO:teuthology.orchestra.run.vm09.stdout: nvmetcli noarch 0.8-3.el9 baseos 44 k
2026-03-09T18:23:58.910 INFO:teuthology.orchestra.run.vm09.stdout:Installing dependencies:
2026-03-09T18:23:58.910 INFO:teuthology.orchestra.run.vm09.stdout: python3-configshell noarch 1:1.1.30-1.el9 baseos 72 k
2026-03-09T18:23:58.910 INFO:teuthology.orchestra.run.vm09.stdout: python3-kmod x86_64 0.9-32.el9 baseos 84 k
2026-03-09T18:23:58.910 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyparsing noarch 2.4.7-9.el9 baseos 150 k
2026-03-09T18:23:58.910 INFO:teuthology.orchestra.run.vm09.stdout: python3-urwid x86_64 2.1.2-4.el9 baseos 837 k
2026-03-09T18:23:58.910 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:23:58.910 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary
2026-03-09T18:23:58.910 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-09T18:23:58.910 INFO:teuthology.orchestra.run.vm09.stdout:Install 6 Packages
2026-03-09T18:23:58.910 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:23:58.910 INFO:teuthology.orchestra.run.vm09.stdout:Total download size: 2.3 M
2026-03-09T18:23:58.910 INFO:teuthology.orchestra.run.vm09.stdout:Installed size: 11 M
2026-03-09T18:23:58.910 INFO:teuthology.orchestra.run.vm09.stdout:Downloading Packages:
2026-03-09T18:23:58.924 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-09T18:23:58.924 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-09T18:23:58.924 INFO:teuthology.orchestra.run.vm04.stdout: Package Architecture Version Repository Size
2026-03-09T18:23:58.924 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-09T18:23:58.924 INFO:teuthology.orchestra.run.vm04.stdout:Installing:
2026-03-09T18:23:58.924 INFO:teuthology.orchestra.run.vm04.stdout: nvme-cli x86_64 2.16-1.el9 baseos 1.2 M
2026-03-09T18:23:58.924 INFO:teuthology.orchestra.run.vm04.stdout: nvmetcli noarch 0.8-3.el9 baseos 44 k
2026-03-09T18:23:58.924 INFO:teuthology.orchestra.run.vm04.stdout:Installing dependencies:
2026-03-09T18:23:58.924 INFO:teuthology.orchestra.run.vm04.stdout: python3-configshell noarch 1:1.1.30-1.el9 baseos 72 k
2026-03-09T18:23:58.924 INFO:teuthology.orchestra.run.vm04.stdout: python3-kmod x86_64 0.9-32.el9 baseos 84 k
2026-03-09T18:23:58.924 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyparsing noarch 2.4.7-9.el9 baseos 150 k
2026-03-09T18:23:58.925 INFO:teuthology.orchestra.run.vm04.stdout: python3-urwid x86_64 2.1.2-4.el9 baseos 837 k
2026-03-09T18:23:58.925 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T18:23:58.925 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary
2026-03-09T18:23:58.925 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-09T18:23:58.925 INFO:teuthology.orchestra.run.vm04.stdout:Install 6 Packages
2026-03-09T18:23:58.925 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T18:23:58.927 INFO:teuthology.orchestra.run.vm04.stdout:Total download size: 2.3 M
2026-03-09T18:23:58.927 INFO:teuthology.orchestra.run.vm04.stdout:Installed size: 11 M
2026-03-09T18:23:58.927 INFO:teuthology.orchestra.run.vm04.stdout:Downloading Packages:
2026-03-09T18:23:59.155 INFO:teuthology.orchestra.run.vm04.stdout:(1/6): nvmetcli-0.8-3.el9.noarch.rpm 352 kB/s | 44 kB 00:00
2026-03-09T18:23:59.157 INFO:teuthology.orchestra.run.vm09.stdout:(1/6): nvmetcli-0.8-3.el9.noarch.rpm 343 kB/s | 44 kB 00:00
2026-03-09T18:23:59.166 INFO:teuthology.orchestra.run.vm09.stdout:(2/6): python3-configshell-1.1.30-1.el9.noarch. 525 kB/s | 72 kB 00:00
2026-03-09T18:23:59.186 INFO:teuthology.orchestra.run.vm04.stdout:(2/6): python3-configshell-1.1.30-1.el9.noarch. 462 kB/s | 72 kB 00:00
2026-03-09T18:23:59.217 INFO:teuthology.orchestra.run.vm09.stdout:(3/6): python3-kmod-0.9-32.el9.x86_64.rpm 1.4 MB/s | 84 kB 00:00
2026-03-09T18:23:59.233 INFO:teuthology.orchestra.run.vm09.stdout:(4/6): python3-pyparsing-2.4.7-9.el9.noarch.rpm 2.2 MB/s | 150 kB 00:00
2026-03-09T18:23:59.248 INFO:teuthology.orchestra.run.vm04.stdout:(3/6): python3-kmod-0.9-32.el9.x86_64.rpm 907 kB/s | 84 kB 00:00
2026-03-09T18:23:59.270 INFO:teuthology.orchestra.run.vm09.stdout:(5/6): nvme-cli-2.16-1.el9.x86_64.rpm 4.8 MB/s | 1.2 MB 00:00
2026-03-09T18:23:59.309 INFO:teuthology.orchestra.run.vm09.stdout:(6/6): python3-urwid-2.1.2-4.el9.x86_64.rpm 8.9 MB/s | 837 kB 00:00
2026-03-09T18:23:59.309 INFO:teuthology.orchestra.run.vm09.stdout:--------------------------------------------------------------------------------
2026-03-09T18:23:59.309 INFO:teuthology.orchestra.run.vm09.stdout:Total 5.8 MB/s | 2.3 MB 00:00
2026-03-09T18:23:59.311 INFO:teuthology.orchestra.run.vm04.stdout:(4/6): python3-pyparsing-2.4.7-9.el9.noarch.rpm 1.2 MB/s | 150 kB 00:00
2026-03-09T18:23:59.373 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check
2026-03-09T18:23:59.381 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded.
2026-03-09T18:23:59.381 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test
2026-03-09T18:23:59.445 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded.
2026-03-09T18:23:59.445 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction 2026-03-09T18:23:59.584 INFO:teuthology.orchestra.run.vm04.stdout:(5/6): python3-urwid-2.1.2-4.el9.x86_64.rpm 2.4 MB/s | 837 kB 00:00 2026-03-09T18:23:59.621 INFO:teuthology.orchestra.run.vm04.stdout:(6/6): nvme-cli-2.16-1.el9.x86_64.rpm 2.0 MB/s | 1.2 MB 00:00 2026-03-09T18:23:59.621 INFO:teuthology.orchestra.run.vm04.stdout:-------------------------------------------------------------------------------- 2026-03-09T18:23:59.621 INFO:teuthology.orchestra.run.vm04.stdout:Total 3.3 MB/s | 2.3 MB 00:00 2026-03-09T18:23:59.641 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1 2026-03-09T18:23:59.652 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-urwid-2.1.2-4.el9.x86_64 1/6 2026-03-09T18:23:59.667 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pyparsing-2.4.7-9.el9.noarch 2/6 2026-03-09T18:23:59.678 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-configshell-1:1.1.30-1.el9.noarch 3/6 2026-03-09T18:23:59.688 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-kmod-0.9-32.el9.x86_64 4/6 2026-03-09T18:23:59.690 INFO:teuthology.orchestra.run.vm09.stdout: Installing : nvmetcli-0.8-3.el9.noarch 5/6 2026-03-09T18:23:59.702 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check 2026-03-09T18:23:59.711 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded. 2026-03-09T18:23:59.711 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test 2026-03-09T18:23:59.774 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded. 
2026-03-09T18:23:59.774 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction 2026-03-09T18:23:59.881 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: nvmetcli-0.8-3.el9.noarch 5/6 2026-03-09T18:23:59.889 INFO:teuthology.orchestra.run.vm09.stdout: Installing : nvme-cli-2.16-1.el9.x86_64 6/6 2026-03-09T18:23:59.955 INFO:teuthology.orchestra.run.vm04.stdout: Preparing : 1/1 2026-03-09T18:23:59.967 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-urwid-2.1.2-4.el9.x86_64 1/6 2026-03-09T18:23:59.977 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-pyparsing-2.4.7-9.el9.noarch 2/6 2026-03-09T18:23:59.985 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-configshell-1:1.1.30-1.el9.noarch 3/6 2026-03-09T18:23:59.992 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-kmod-0.9-32.el9.x86_64 4/6 2026-03-09T18:23:59.994 INFO:teuthology.orchestra.run.vm04.stdout: Installing : nvmetcli-0.8-3.el9.noarch 5/6 2026-03-09T18:24:00.205 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: nvmetcli-0.8-3.el9.noarch 5/6 2026-03-09T18:24:00.210 INFO:teuthology.orchestra.run.vm04.stdout: Installing : nvme-cli-2.16-1.el9.x86_64 6/6 2026-03-09T18:24:00.326 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: nvme-cli-2.16-1.el9.x86_64 6/6 2026-03-09T18:24:00.326 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /usr/lib/systemd/system/nvmefc-boot-connections.service. 2026-03-09T18:24:00.326 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:24:00.657 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: nvme-cli-2.16-1.el9.x86_64 6/6 2026-03-09T18:24:00.657 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /usr/lib/systemd/system/nvmefc-boot-connections.service. 
2026-03-09T18:24:00.657 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:24:01.006 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : nvme-cli-2.16-1.el9.x86_64 1/6 2026-03-09T18:24:01.006 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : nvmetcli-0.8-3.el9.noarch 2/6 2026-03-09T18:24:01.006 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-configshell-1:1.1.30-1.el9.noarch 3/6 2026-03-09T18:24:01.006 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-kmod-0.9-32.el9.x86_64 4/6 2026-03-09T18:24:01.006 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.noarch 5/6 2026-03-09T18:24:01.123 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-urwid-2.1.2-4.el9.x86_64 6/6 2026-03-09T18:24:01.124 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:24:01.124 INFO:teuthology.orchestra.run.vm09.stdout:Installed: 2026-03-09T18:24:01.124 INFO:teuthology.orchestra.run.vm09.stdout: nvme-cli-2.16-1.el9.x86_64 nvmetcli-0.8-3.el9.noarch 2026-03-09T18:24:01.124 INFO:teuthology.orchestra.run.vm09.stdout: python3-configshell-1:1.1.30-1.el9.noarch python3-kmod-0.9-32.el9.x86_64 2026-03-09T18:24:01.124 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyparsing-2.4.7-9.el9.noarch python3-urwid-2.1.2-4.el9.x86_64 2026-03-09T18:24:01.124 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:24:01.124 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 
2026-03-09T18:24:01.258 DEBUG:teuthology.parallel:result is None 2026-03-09T18:24:01.261 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : nvme-cli-2.16-1.el9.x86_64 1/6 2026-03-09T18:24:01.261 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : nvmetcli-0.8-3.el9.noarch 2/6 2026-03-09T18:24:01.261 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-configshell-1:1.1.30-1.el9.noarch 3/6 2026-03-09T18:24:01.261 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-kmod-0.9-32.el9.x86_64 4/6 2026-03-09T18:24:01.261 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.noarch 5/6 2026-03-09T18:24:01.368 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-urwid-2.1.2-4.el9.x86_64 6/6 2026-03-09T18:24:01.368 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:24:01.368 INFO:teuthology.orchestra.run.vm04.stdout:Installed: 2026-03-09T18:24:01.368 INFO:teuthology.orchestra.run.vm04.stdout: nvme-cli-2.16-1.el9.x86_64 nvmetcli-0.8-3.el9.noarch 2026-03-09T18:24:01.368 INFO:teuthology.orchestra.run.vm04.stdout: python3-configshell-1:1.1.30-1.el9.noarch python3-kmod-0.9-32.el9.x86_64 2026-03-09T18:24:01.368 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyparsing-2.4.7-9.el9.noarch python3-urwid-2.1.2-4.el9.x86_64 2026-03-09T18:24:01.368 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:24:01.368 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 2026-03-09T18:24:01.445 DEBUG:teuthology.parallel:result is None 2026-03-09T18:24:01.445 INFO:teuthology.run_tasks:Running task install... 
2026-03-09T18:24:01.447 DEBUG:teuthology.task.install:project ceph 2026-03-09T18:24:01.447 DEBUG:teuthology.task.install:INSTALL overrides: {'ceph': {'extra_system_packages': {'deb': ['python3-pytest'], 'rpm': ['python3-pytest']}, 'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}, 'extra_packages': ['cephadm'], 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}} 2026-03-09T18:24:01.447 DEBUG:teuthology.task.install:config {'extra_system_packages': {'deb': ['python3-pytest', 'python3-xmltodict', 'python3-jmespath'], 'rpm': ['python3-pytest', 'bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}, 'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'} 2026-03-09T18:24:01.448 INFO:teuthology.task.install:Using flavor: default 2026-03-09T18:24:01.450 DEBUG:teuthology.task.install:Package list is: {'deb': ['ceph', 'cephadm', 'ceph-mds', 'ceph-mgr', 'ceph-common', 'ceph-fuse', 'ceph-test', 'ceph-volume', 'radosgw', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'libcephfs2', 'libcephfs-dev', 'librados2', 'librbd1', 'rbd-fuse'], 'rpm': ['ceph-radosgw', 'ceph-test', 'ceph', 'ceph-base', 'cephadm', 'ceph-immutable-object-cache', 'ceph-mgr', 'ceph-mgr-dashboard', 'ceph-mgr-diskprediction-local', 'ceph-mgr-rook', 'ceph-mgr-cephadm', 'ceph-fuse', 'ceph-volume', 'librados-devel', 'libcephfs2', 'libcephfs-devel', 'librados2', 'librbd1', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'rbd-fuse', 'rbd-mirror', 'rbd-nbd']} 2026-03-09T18:24:01.450 INFO:teuthology.task.install:extra packages: [] 2026-03-09T18:24:01.450 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': [], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-pytest', 
'python3-xmltodict', 'python3-jmespath'], 'rpm': ['python3-pytest', 'bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'tag': None, 'wait_for_package': False} 2026-03-09T18:24:01.450 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:24:01.451 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': [], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-pytest', 'python3-xmltodict', 'python3-jmespath'], 'rpm': ['python3-pytest', 'bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'tag': None, 'wait_for_package': False} 2026-03-09T18:24:01.451 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:24:02.130 INFO:teuthology.task.install.rpm:Pulling from https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/ 2026-03-09T18:24:02.130 INFO:teuthology.task.install.rpm:Package version is 19.2.3-678.ge911bdeb 2026-03-09T18:24:02.193 INFO:teuthology.task.install.rpm:Pulling from https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/ 2026-03-09T18:24:02.193 INFO:teuthology.task.install.rpm:Package version is 19.2.3-678.ge911bdeb 
2026-03-09T18:24:02.676 INFO:teuthology.packaging:Writing yum repo: [ceph] name=ceph packages for $basearch baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/$basearch enabled=1 gpgcheck=0 type=rpm-md [ceph-noarch] name=ceph noarch packages baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/noarch enabled=1 gpgcheck=0 type=rpm-md [ceph-source] name=ceph source packages baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/SRPMS enabled=1 gpgcheck=0 type=rpm-md 2026-03-09T18:24:02.676 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T18:24:02.676 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/yum.repos.d/ceph.repo 2026-03-09T18:24:02.713 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, python3-pytest, bzip2, perl-Test-Harness, python3-xmltodict, python3-jmespath on remote rpm x86_64 2026-03-09T18:24:02.713 DEBUG:teuthology.orchestra.run.vm04:> if test -f /etc/yum.repos.d/ceph.repo ; then sudo sed -i -e ':a;N;$!ba;s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g' -e 's;ref/[a-zA-Z0-9_-]*/;sha1/e911bdebe5c8faa3800735d1568fcdca65db60df/;g' /etc/yum.repos.d/ceph.repo ; fi 2026-03-09T18:24:02.753 INFO:teuthology.packaging:Writing yum repo: [ceph] name=ceph packages for $basearch baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/$basearch enabled=1 gpgcheck=0 type=rpm-md [ceph-noarch] name=ceph noarch packages 
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/noarch enabled=1 gpgcheck=0 type=rpm-md [ceph-source] name=ceph source packages baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/SRPMS enabled=1 gpgcheck=0 type=rpm-md 2026-03-09T18:24:02.753 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T18:24:02.753 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/yum.repos.d/ceph.repo 2026-03-09T18:24:02.794 DEBUG:teuthology.orchestra.run.vm04:> sudo touch -a /etc/yum/pluginconf.d/priorities.conf ; test -e /etc/yum/pluginconf.d/priorities.conf.orig || sudo cp -af /etc/yum/pluginconf.d/priorities.conf /etc/yum/pluginconf.d/priorities.conf.orig 2026-03-09T18:24:02.797 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, python3-pytest, bzip2, perl-Test-Harness, python3-xmltodict, python3-jmespath on remote rpm x86_64 2026-03-09T18:24:02.798 DEBUG:teuthology.orchestra.run.vm09:> if test -f /etc/yum.repos.d/ceph.repo ; then sudo sed -i -e ':a;N;$!ba;s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g' -e 's;ref/[a-zA-Z0-9_-]*/;sha1/e911bdebe5c8faa3800735d1568fcdca65db60df/;g' /etc/yum.repos.d/ceph.repo ; fi 2026-03-09T18:24:02.877 DEBUG:teuthology.orchestra.run.vm09:> sudo touch -a /etc/yum/pluginconf.d/priorities.conf ; test -e /etc/yum/pluginconf.d/priorities.conf.orig || sudo cp -af /etc/yum/pluginconf.d/priorities.conf /etc/yum/pluginconf.d/priorities.conf.orig 2026-03-09T18:24:02.887 DEBUG:teuthology.orchestra.run.vm04:> grep check_obsoletes /etc/yum/pluginconf.d/priorities.conf && sudo sed -i 
's/check_obsoletes.*0/check_obsoletes = 1/g' /etc/yum/pluginconf.d/priorities.conf || echo 'check_obsoletes = 1' | sudo tee -a /etc/yum/pluginconf.d/priorities.conf 2026-03-09T18:24:02.919 INFO:teuthology.orchestra.run.vm04.stdout:check_obsoletes = 1 2026-03-09T18:24:02.921 DEBUG:teuthology.orchestra.run.vm04:> sudo yum clean all 2026-03-09T18:24:02.971 DEBUG:teuthology.orchestra.run.vm09:> grep check_obsoletes /etc/yum/pluginconf.d/priorities.conf && sudo sed -i 's/check_obsoletes.*0/check_obsoletes = 1/g' /etc/yum/pluginconf.d/priorities.conf || echo 'check_obsoletes = 1' | sudo tee -a /etc/yum/pluginconf.d/priorities.conf 2026-03-09T18:24:03.005 INFO:teuthology.orchestra.run.vm09.stdout:check_obsoletes = 1 2026-03-09T18:24:03.007 DEBUG:teuthology.orchestra.run.vm09:> sudo yum clean all 2026-03-09T18:24:03.164 INFO:teuthology.orchestra.run.vm04.stdout:41 files removed 2026-03-09T18:24:03.199 DEBUG:teuthology.orchestra.run.vm04:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd python3-pytest bzip2 perl-Test-Harness python3-xmltodict python3-jmespath 2026-03-09T18:24:03.245 INFO:teuthology.orchestra.run.vm09.stdout:41 files removed 2026-03-09T18:24:03.280 DEBUG:teuthology.orchestra.run.vm09:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd python3-pytest bzip2 perl-Test-Harness python3-xmltodict python3-jmespath 2026-03-09T18:24:04.613 
INFO:teuthology.orchestra.run.vm04.stdout:ceph packages for x86_64 70 kB/s | 84 kB 00:01 2026-03-09T18:24:04.679 INFO:teuthology.orchestra.run.vm09.stdout:ceph packages for x86_64 72 kB/s | 84 kB 00:01 2026-03-09T18:24:05.588 INFO:teuthology.orchestra.run.vm04.stdout:ceph noarch packages 12 kB/s | 12 kB 00:00 2026-03-09T18:24:05.666 INFO:teuthology.orchestra.run.vm09.stdout:ceph noarch packages 12 kB/s | 12 kB 00:00 2026-03-09T18:24:06.544 INFO:teuthology.orchestra.run.vm04.stdout:ceph source packages 2.0 kB/s | 1.9 kB 00:00 2026-03-09T18:24:06.664 INFO:teuthology.orchestra.run.vm09.stdout:ceph source packages 1.9 kB/s | 1.9 kB 00:00 2026-03-09T18:24:09.553 INFO:teuthology.orchestra.run.vm09.stdout:CentOS Stream 9 - BaseOS 3.1 MB/s | 8.9 MB 00:02 2026-03-09T18:24:11.739 INFO:teuthology.orchestra.run.vm04.stdout:CentOS Stream 9 - BaseOS 1.7 MB/s | 8.9 MB 00:05 2026-03-09T18:24:14.292 INFO:teuthology.orchestra.run.vm09.stdout:CentOS Stream 9 - AppStream 6.7 MB/s | 27 MB 00:04 2026-03-09T18:24:24.452 INFO:teuthology.orchestra.run.vm09.stdout:CentOS Stream 9 - CRB 1.1 MB/s | 8.0 MB 00:07 2026-03-09T18:24:25.752 INFO:teuthology.orchestra.run.vm04.stdout:CentOS Stream 9 - AppStream 2.0 MB/s | 27 MB 00:13 2026-03-09T18:24:26.100 INFO:teuthology.orchestra.run.vm09.stdout:CentOS Stream 9 - Extras packages 29 kB/s | 20 kB 00:00 2026-03-09T18:24:26.577 INFO:teuthology.orchestra.run.vm09.stdout:Extra Packages for Enterprise Linux 51 MB/s | 20 MB 00:00 2026-03-09T18:24:29.843 INFO:teuthology.orchestra.run.vm04.stdout:CentOS Stream 9 - CRB 6.8 MB/s | 8.0 MB 00:01 2026-03-09T18:24:31.375 INFO:teuthology.orchestra.run.vm09.stdout:lab-extras 65 kB/s | 50 kB 00:00 2026-03-09T18:24:31.527 INFO:teuthology.orchestra.run.vm04.stdout:CentOS Stream 9 - Extras packages 27 kB/s | 20 kB 00:00 2026-03-09T18:24:31.994 INFO:teuthology.orchestra.run.vm04.stdout:Extra Packages for Enterprise Linux 52 MB/s | 20 MB 00:00 2026-03-09T18:24:32.821 INFO:teuthology.orchestra.run.vm09.stdout:Package 
librados2-2:16.2.4-5.el9.x86_64 is already installed. 2026-03-09T18:24:32.821 INFO:teuthology.orchestra.run.vm09.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed. 2026-03-09T18:24:32.826 INFO:teuthology.orchestra.run.vm09.stdout:Package bzip2-1.0.8-11.el9.x86_64 is already installed. 2026-03-09T18:24:32.827 INFO:teuthology.orchestra.run.vm09.stdout:Package perl-Test-Harness-1:3.42-461.el9.noarch is already installed. 2026-03-09T18:24:32.858 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout:====================================================================================== 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout:====================================================================================== 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout:Installing: 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 6.5 k 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.5 M 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.2 M 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 145 k 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.1 M 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 150 k 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 3.8 M 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: 
ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 7.4 M 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 49 k 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 11 M 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 50 M 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 299 k 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 769 k 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 34 k 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.0 M 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 127 k 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 165 k 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: python3-jmespath noarch 1.0.1-1.el9 appstream 48 k 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: python3-pytest noarch 6.2.2-7.el9 appstream 519 k 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 323 k 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 303 k 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 100 k 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k 2026-03-09T18:24:32.863 
INFO:teuthology.orchestra.run.vm09.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 85 k 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.1 M 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 171 k 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout:Upgrading: 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.4 M 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.2 M 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout:Installing dependencies: 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: abseil-cpp x86_64 20211102.0-4.el9 epel 551 k 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: boost-program-options x86_64 1.75.0-13.el9 appstream 104 k 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 22 M 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 31 k 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 2.4 M 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 253 k 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 4.7 M 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 17 M 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 17 k 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: ceph-selinux x86_64 
2:19.2.3-678.ge911bdeb.el9 ceph 25 k 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: cryptsetup x86_64 2.8.1-3.el9 baseos 351 k 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas x86_64 3.0.4-9.el9 appstream 30 k 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 appstream 3.0 M 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 appstream 15 k 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: grpc-data noarch 1.46.7-10.el9 epel 19 k 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 40 k 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k 2026-03-09T18:24:32.863 INFO:teuthology.orchestra.run.vm09.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 163 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: libconfig x86_64 1.7.2-9.el9 baseos 72 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: libgfortran x86_64 11.5.0-14.el9 baseos 794 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: libnbd x86_64 1.20.3-4.el9 appstream 164 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 160 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: libquadmath x86_64 11.5.0-14.el9 baseos 184 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 45 k 2026-03-09T18:24:32.864 
INFO:teuthology.orchestra.run.vm09.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 503 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.4 M 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 246 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: libxslt x86_64 1.1.34-12.el9 appstream 233 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 292 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: lua x86_64 5.4.4-4.el9 appstream 188 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: lua-devel x86_64 5.4.4-4.el9 crb 22 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: luarocks noarch 3.9.2-5.el9 epel 151 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: mailcap noarch 2.1.49-5.el9 baseos 33 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: openblas x86_64 0.3.29-1.el9 appstream 42 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: pciutils x86_64 3.7.0-7.el9 baseos 93 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: protobuf x86_64 3.14.0-17.el9 appstream 1.0 M 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: protobuf-compiler x86_64 3.14.0-17.el9 crb 862 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 
548 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: python3-babel noarch 2.9.1-2.el9 appstream 6.0 M 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 45 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 142 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 253 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: python3-cheroot noarch 10.0.1-4.el9 epel 173 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy noarch 18.6.1-2.el9 epel 358 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: python3-cryptography x86_64 36.0.1-5.el9 baseos 1.2 M 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: python3-devel x86_64 3.9.25-3.el9 appstream 244 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio x86_64 1.46.7-10.el9 epel 2.0 M 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 epel 144 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: python3-iniconfig noarch 
1.1.1-7.el9 appstream 17 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 epel 23 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: python3-jinja2 noarch 2.11.3-8.el9 appstream 249 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 177 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: python3-logutils noarch 0.3.5-21.el9 epel 46 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: python3-mako noarch 1.1.4-6.el9 appstream 172 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 35 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: python3-more-itertools noarch 8.12.0-2.el9 epel 79 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy x86_64 1:1.23.5-2.el9 appstream 6.1 M 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 appstream 442 k 2026-03-09T18:24:32.864 INFO:teuthology.orchestra.run.vm09.stdout: 
python3-packaging noarch 20.9-5.el9 appstream 77 k 2026-03-09T18:24:32.865 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan noarch 1.4.2-3.el9 epel 272 k 2026-03-09T18:24:32.865 INFO:teuthology.orchestra.run.vm09.stdout: python3-pluggy noarch 0.13.1-7.el9 appstream 41 k 2026-03-09T18:24:32.865 INFO:teuthology.orchestra.run.vm09.stdout: python3-ply noarch 3.11-14.el9 baseos 106 k 2026-03-09T18:24:32.865 INFO:teuthology.orchestra.run.vm09.stdout: python3-portend noarch 3.1.0-2.el9 epel 16 k 2026-03-09T18:24:32.865 INFO:teuthology.orchestra.run.vm09.stdout: python3-protobuf noarch 3.14.0-17.el9 appstream 267 k 2026-03-09T18:24:32.865 INFO:teuthology.orchestra.run.vm09.stdout: python3-py noarch 1.10.0-6.el9 appstream 477 k 2026-03-09T18:24:32.865 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k 2026-03-09T18:24:32.865 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1 noarch 0.4.8-7.el9 appstream 157 k 2026-03-09T18:24:32.865 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 appstream 277 k 2026-03-09T18:24:32.865 INFO:teuthology.orchestra.run.vm09.stdout: python3-pycparser noarch 2.20-6.el9 baseos 135 k 2026-03-09T18:24:32.865 INFO:teuthology.orchestra.run.vm09.stdout: python3-repoze-lru noarch 0.7-16.el9 epel 31 k 2026-03-09T18:24:32.865 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests noarch 2.25.1-10.el9 baseos 126 k 2026-03-09T18:24:32.865 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 54 k 2026-03-09T18:24:32.865 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k 2026-03-09T18:24:32.865 INFO:teuthology.orchestra.run.vm09.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k 2026-03-09T18:24:32.865 INFO:teuthology.orchestra.run.vm09.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M 2026-03-09T18:24:32.865 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora noarch 
5.0.0-2.el9 epel 36 k 2026-03-09T18:24:32.865 INFO:teuthology.orchestra.run.vm09.stdout: python3-toml noarch 0.10.2-6.el9 appstream 42 k 2026-03-09T18:24:32.865 INFO:teuthology.orchestra.run.vm09.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k 2026-03-09T18:24:32.865 INFO:teuthology.orchestra.run.vm09.stdout: python3-urllib3 noarch 1.26.5-7.el9 baseos 218 k 2026-03-09T18:24:32.865 INFO:teuthology.orchestra.run.vm09.stdout: python3-webob noarch 1.8.8-2.el9 epel 230 k 2026-03-09T18:24:32.865 INFO:teuthology.orchestra.run.vm09.stdout: python3-websocket-client noarch 1.2.3-2.el9 epel 90 k 2026-03-09T18:24:32.865 INFO:teuthology.orchestra.run.vm09.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 epel 427 k 2026-03-09T18:24:32.865 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc-lockfile noarch 2.0-10.el9 epel 20 k 2026-03-09T18:24:32.865 INFO:teuthology.orchestra.run.vm09.stdout: qatlib x86_64 25.08.0-2.el9 appstream 240 k 2026-03-09T18:24:32.865 INFO:teuthology.orchestra.run.vm09.stdout: qatzip-libs x86_64 1.3.1-1.el9 appstream 66 k 2026-03-09T18:24:32.865 INFO:teuthology.orchestra.run.vm09.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k 2026-03-09T18:24:32.865 INFO:teuthology.orchestra.run.vm09.stdout: socat x86_64 1.7.4.1-8.el9 appstream 303 k 2026-03-09T18:24:32.865 INFO:teuthology.orchestra.run.vm09.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M 2026-03-09T18:24:32.865 INFO:teuthology.orchestra.run.vm09.stdout: unzip x86_64 6.0-59.el9 baseos 182 k 2026-03-09T18:24:32.865 INFO:teuthology.orchestra.run.vm09.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 64 k 2026-03-09T18:24:32.865 INFO:teuthology.orchestra.run.vm09.stdout: zip x86_64 3.0-35.el9 baseos 266 k 2026-03-09T18:24:32.865 INFO:teuthology.orchestra.run.vm09.stdout:Installing weak dependencies: 2026-03-09T18:24:32.865 INFO:teuthology.orchestra.run.vm09.stdout: qatlib-service x86_64 25.08.0-2.el9 appstream 37 k 2026-03-09T18:24:32.865 INFO:teuthology.orchestra.run.vm09.stdout: 
2026-03-09T18:24:32.865 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary 2026-03-09T18:24:32.865 INFO:teuthology.orchestra.run.vm09.stdout:====================================================================================== 2026-03-09T18:24:32.865 INFO:teuthology.orchestra.run.vm09.stdout:Install 138 Packages 2026-03-09T18:24:32.865 INFO:teuthology.orchestra.run.vm09.stdout:Upgrade 2 Packages 2026-03-09T18:24:32.865 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:24:32.865 INFO:teuthology.orchestra.run.vm09.stdout:Total download size: 211 M 2026-03-09T18:24:32.865 INFO:teuthology.orchestra.run.vm09.stdout:Downloading Packages: 2026-03-09T18:24:35.001 INFO:teuthology.orchestra.run.vm09.stdout:(1/140): ceph-19.2.3-678.ge911bdeb.el9.x86_64.r 13 kB/s | 6.5 kB 00:00 2026-03-09T18:24:35.927 INFO:teuthology.orchestra.run.vm09.stdout:(2/140): ceph-fuse-19.2.3-678.ge911bdeb.el9.x86 1.2 MB/s | 1.2 MB 00:00 2026-03-09T18:24:36.046 INFO:teuthology.orchestra.run.vm09.stdout:(3/140): ceph-immutable-object-cache-19.2.3-678 1.2 MB/s | 145 kB 00:00 2026-03-09T18:24:36.619 INFO:teuthology.orchestra.run.vm04.stdout:lab-extras 62 kB/s | 50 kB 00:00 2026-03-09T18:24:36.994 INFO:teuthology.orchestra.run.vm09.stdout:(4/140): ceph-mds-19.2.3-678.ge911bdeb.el9.x86_ 2.6 MB/s | 2.4 MB 00:00 2026-03-09T18:24:37.325 INFO:teuthology.orchestra.run.vm09.stdout:(5/140): ceph-base-19.2.3-678.ge911bdeb.el9.x86 1.9 MB/s | 5.5 MB 00:02 2026-03-09T18:24:37.346 INFO:teuthology.orchestra.run.vm09.stdout:(6/140): ceph-mgr-19.2.3-678.ge911bdeb.el9.x86_ 3.1 MB/s | 1.1 MB 00:00 2026-03-09T18:24:38.057 INFO:teuthology.orchestra.run.vm04.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed. 2026-03-09T18:24:38.057 INFO:teuthology.orchestra.run.vm04.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed. 2026-03-09T18:24:38.062 INFO:teuthology.orchestra.run.vm04.stdout:Package bzip2-1.0.8-11.el9.x86_64 is already installed. 
2026-03-09T18:24:38.062 INFO:teuthology.orchestra.run.vm04.stdout:Package perl-Test-Harness-1:3.42-461.el9.noarch is already installed. 2026-03-09T18:24:38.092 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 2026-03-09T18:24:38.097 INFO:teuthology.orchestra.run.vm04.stdout:====================================================================================== 2026-03-09T18:24:38.103 INFO:teuthology.orchestra.run.vm04.stdout: Package Arch Version Repository Size 2026-03-09T18:24:38.103 INFO:teuthology.orchestra.run.vm04.stdout:====================================================================================== 2026-03-09T18:24:38.103 INFO:teuthology.orchestra.run.vm04.stdout:Installing: 2026-03-09T18:24:38.103 INFO:teuthology.orchestra.run.vm04.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 6.5 k 2026-03-09T18:24:38.103 INFO:teuthology.orchestra.run.vm04.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.5 M 2026-03-09T18:24:38.103 INFO:teuthology.orchestra.run.vm04.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.2 M 2026-03-09T18:24:38.103 INFO:teuthology.orchestra.run.vm04.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 145 k 2026-03-09T18:24:38.103 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.1 M 2026-03-09T18:24:38.103 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 150 k 2026-03-09T18:24:38.103 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 3.8 M 2026-03-09T18:24:38.103 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 7.4 M 2026-03-09T18:24:38.103 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 49 k 2026-03-09T18:24:38.103 INFO:teuthology.orchestra.run.vm04.stdout: ceph-radosgw x86_64 
2:19.2.3-678.ge911bdeb.el9 ceph 11 M 2026-03-09T18:24:38.103 INFO:teuthology.orchestra.run.vm04.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 50 M 2026-03-09T18:24:38.103 INFO:teuthology.orchestra.run.vm04.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 299 k 2026-03-09T18:24:38.103 INFO:teuthology.orchestra.run.vm04.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 769 k 2026-03-09T18:24:38.103 INFO:teuthology.orchestra.run.vm04.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 34 k 2026-03-09T18:24:38.103 INFO:teuthology.orchestra.run.vm04.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.0 M 2026-03-09T18:24:38.103 INFO:teuthology.orchestra.run.vm04.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 127 k 2026-03-09T18:24:38.103 INFO:teuthology.orchestra.run.vm04.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 165 k 2026-03-09T18:24:38.103 INFO:teuthology.orchestra.run.vm04.stdout: python3-jmespath noarch 1.0.1-1.el9 appstream 48 k 2026-03-09T18:24:38.103 INFO:teuthology.orchestra.run.vm04.stdout: python3-pytest noarch 6.2.2-7.el9 appstream 519 k 2026-03-09T18:24:38.103 INFO:teuthology.orchestra.run.vm04.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 323 k 2026-03-09T18:24:38.103 INFO:teuthology.orchestra.run.vm04.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 303 k 2026-03-09T18:24:38.103 INFO:teuthology.orchestra.run.vm04.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 100 k 2026-03-09T18:24:38.103 INFO:teuthology.orchestra.run.vm04.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k 2026-03-09T18:24:38.103 INFO:teuthology.orchestra.run.vm04.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 85 k 2026-03-09T18:24:38.103 INFO:teuthology.orchestra.run.vm04.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.1 M 2026-03-09T18:24:38.103 INFO:teuthology.orchestra.run.vm04.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 
ceph 171 k 2026-03-09T18:24:38.103 INFO:teuthology.orchestra.run.vm04.stdout:Upgrading: 2026-03-09T18:24:38.103 INFO:teuthology.orchestra.run.vm04.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.4 M 2026-03-09T18:24:38.103 INFO:teuthology.orchestra.run.vm04.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.2 M 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout:Installing dependencies: 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: abseil-cpp x86_64 20211102.0-4.el9 epel 551 k 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: boost-program-options x86_64 1.75.0-13.el9 appstream 104 k 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 22 M 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 31 k 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 2.4 M 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 253 k 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 4.7 M 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 17 M 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 17 k 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 25 k 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: cryptsetup x86_64 2.8.1-3.el9 baseos 351 k 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas x86_64 3.0.4-9.el9 appstream 30 k 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas-netlib 
x86_64 3.0.4-9.el9 appstream 3.0 M 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 appstream 15 k 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: grpc-data noarch 1.46.7-10.el9 epel 19 k 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 40 k 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 163 k 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: libconfig x86_64 1.7.2-9.el9 baseos 72 k 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: libgfortran x86_64 11.5.0-14.el9 baseos 794 k 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: libnbd x86_64 1.20.3-4.el9 appstream 164 k 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 160 k 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: libquadmath x86_64 11.5.0-14.el9 baseos 184 k 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 45 k 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 503 k 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.4 M 
2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 246 k 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: libxslt x86_64 1.1.34-12.el9 appstream 233 k 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 292 k 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: lua x86_64 5.4.4-4.el9 appstream 188 k 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: lua-devel x86_64 5.4.4-4.el9 crb 22 k 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: luarocks noarch 3.9.2-5.el9 epel 151 k 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: mailcap noarch 2.1.49-5.el9 baseos 33 k 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: openblas x86_64 0.3.29-1.el9 appstream 42 k 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: pciutils x86_64 3.7.0-7.el9 baseos 93 k 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: protobuf x86_64 3.14.0-17.el9 appstream 1.0 M 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: protobuf-compiler x86_64 3.14.0-17.el9 crb 862 k 2026-03-09T18:24:38.104 INFO:teuthology.orchestra.run.vm04.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 548 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-babel noarch 2.9.1-2.el9 appstream 6.0 M 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-backports-tarfile noarch 
1.2.0-1.el9 epel 60 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 45 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 142 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 253 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-cheroot noarch 10.0.1-4.el9 epel 173 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy noarch 18.6.1-2.el9 epel 358 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-cryptography x86_64 36.0.1-5.el9 baseos 1.2 M 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-devel x86_64 3.9.25-3.el9 appstream 244 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-grpcio x86_64 1.46.7-10.el9 epel 2.0 M 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 epel 144 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-iniconfig noarch 1.1.1-7.el9 appstream 17 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: 
python3-jaraco-collections noarch 3.0.0-8.el9 epel 23 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-jinja2 noarch 2.11.3-8.el9 appstream 249 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 177 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-logutils noarch 0.3.5-21.el9 epel 46 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-mako noarch 1.1.4-6.el9 appstream 172 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 35 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-more-itertools noarch 8.12.0-2.el9 epel 79 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-numpy x86_64 1:1.23.5-2.el9 appstream 6.1 M 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 appstream 442 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-packaging noarch 20.9-5.el9 appstream 77 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-pecan noarch 1.4.2-3.el9 epel 272 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-pluggy noarch 0.13.1-7.el9 appstream 41 k 2026-03-09T18:24:38.105 
INFO:teuthology.orchestra.run.vm04.stdout: python3-ply noarch 3.11-14.el9 baseos 106 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-portend noarch 3.1.0-2.el9 epel 16 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-protobuf noarch 3.14.0-17.el9 appstream 267 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-py noarch 1.10.0-6.el9 appstream 477 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyasn1 noarch 0.4.8-7.el9 appstream 157 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 appstream 277 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-pycparser noarch 2.20-6.el9 baseos 135 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze-lru noarch 0.7-16.el9 epel 31 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests noarch 2.25.1-10.el9 baseos 126 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 54 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-tempora noarch 5.0.0-2.el9 epel 36 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-toml noarch 0.10.2-6.el9 appstream 42 k 2026-03-09T18:24:38.105 INFO:teuthology.orchestra.run.vm04.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k 2026-03-09T18:24:38.106 
INFO:teuthology.orchestra.run.vm04.stdout: python3-urllib3 noarch 1.26.5-7.el9 baseos 218 k 2026-03-09T18:24:38.106 INFO:teuthology.orchestra.run.vm04.stdout: python3-webob noarch 1.8.8-2.el9 epel 230 k 2026-03-09T18:24:38.106 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket-client noarch 1.2.3-2.el9 epel 90 k 2026-03-09T18:24:38.106 INFO:teuthology.orchestra.run.vm04.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 epel 427 k 2026-03-09T18:24:38.106 INFO:teuthology.orchestra.run.vm04.stdout: python3-zc-lockfile noarch 2.0-10.el9 epel 20 k 2026-03-09T18:24:38.106 INFO:teuthology.orchestra.run.vm04.stdout: qatlib x86_64 25.08.0-2.el9 appstream 240 k 2026-03-09T18:24:38.106 INFO:teuthology.orchestra.run.vm04.stdout: qatzip-libs x86_64 1.3.1-1.el9 appstream 66 k 2026-03-09T18:24:38.106 INFO:teuthology.orchestra.run.vm04.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k 2026-03-09T18:24:38.106 INFO:teuthology.orchestra.run.vm04.stdout: socat x86_64 1.7.4.1-8.el9 appstream 303 k 2026-03-09T18:24:38.106 INFO:teuthology.orchestra.run.vm04.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M 2026-03-09T18:24:38.106 INFO:teuthology.orchestra.run.vm04.stdout: unzip x86_64 6.0-59.el9 baseos 182 k 2026-03-09T18:24:38.106 INFO:teuthology.orchestra.run.vm04.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 64 k 2026-03-09T18:24:38.106 INFO:teuthology.orchestra.run.vm04.stdout: zip x86_64 3.0-35.el9 baseos 266 k 2026-03-09T18:24:38.106 INFO:teuthology.orchestra.run.vm04.stdout:Installing weak dependencies: 2026-03-09T18:24:38.106 INFO:teuthology.orchestra.run.vm04.stdout: qatlib-service x86_64 25.08.0-2.el9 appstream 37 k 2026-03-09T18:24:38.106 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:24:38.106 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary 2026-03-09T18:24:38.106 INFO:teuthology.orchestra.run.vm04.stdout:====================================================================================== 2026-03-09T18:24:38.106 
INFO:teuthology.orchestra.run.vm04.stdout:Install 138 Packages 2026-03-09T18:24:38.106 INFO:teuthology.orchestra.run.vm04.stdout:Upgrade 2 Packages 2026-03-09T18:24:38.106 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:24:38.106 INFO:teuthology.orchestra.run.vm04.stdout:Total download size: 211 M 2026-03-09T18:24:38.106 INFO:teuthology.orchestra.run.vm04.stdout:Downloading Packages: 2026-03-09T18:24:38.411 INFO:teuthology.orchestra.run.vm09.stdout:(7/140): ceph-mon-19.2.3-678.ge911bdeb.el9.x86_ 4.4 MB/s | 4.7 MB 00:01 2026-03-09T18:24:39.417 INFO:teuthology.orchestra.run.vm04.stdout:(1/140): ceph-19.2.3-678.ge911bdeb.el9.x86_64.r 14 kB/s | 6.5 kB 00:00 2026-03-09T18:24:39.610 INFO:teuthology.orchestra.run.vm09.stdout:(8/140): ceph-common-19.2.3-678.ge911bdeb.el9.x 4.3 MB/s | 22 MB 00:05 2026-03-09T18:24:39.728 INFO:teuthology.orchestra.run.vm09.stdout:(9/140): ceph-selinux-19.2.3-678.ge911bdeb.el9. 215 kB/s | 25 kB 00:00 2026-03-09T18:24:39.889 INFO:teuthology.orchestra.run.vm09.stdout:(10/140): ceph-radosgw-19.2.3-678.ge911bdeb.el9 7.3 MB/s | 11 MB 00:01 2026-03-09T18:24:39.941 INFO:teuthology.orchestra.run.vm09.stdout:(11/140): ceph-osd-19.2.3-678.ge911bdeb.el9.x86 6.6 MB/s | 17 MB 00:02 2026-03-09T18:24:40.009 INFO:teuthology.orchestra.run.vm09.stdout:(12/140): libcephfs-devel-19.2.3-678.ge911bdeb. 
280 kB/s | 34 kB 00:00 2026-03-09T18:24:40.069 INFO:teuthology.orchestra.run.vm09.stdout:(13/140): libcephfs2-19.2.3-678.ge911bdeb.el9.x 7.7 MB/s | 1.0 MB 00:00 2026-03-09T18:24:40.130 INFO:teuthology.orchestra.run.vm09.stdout:(14/140): libcephsqlite-19.2.3-678.ge911bdeb.el 1.3 MB/s | 163 kB 00:00 2026-03-09T18:24:40.187 INFO:teuthology.orchestra.run.vm09.stdout:(15/140): librados-devel-19.2.3-678.ge911bdeb.e 1.1 MB/s | 127 kB 00:00 2026-03-09T18:24:40.256 INFO:teuthology.orchestra.run.vm09.stdout:(16/140): libradosstriper1-19.2.3-678.ge911bdeb 3.9 MB/s | 503 kB 00:00 2026-03-09T18:24:40.376 INFO:teuthology.orchestra.run.vm09.stdout:(17/140): python3-ceph-argparse-19.2.3-678.ge91 375 kB/s | 45 kB 00:00 2026-03-09T18:24:40.471 INFO:teuthology.orchestra.run.vm04.stdout:(2/140): ceph-fuse-19.2.3-678.ge911bdeb.el9.x86 1.1 MB/s | 1.2 MB 00:01 2026-03-09T18:24:40.497 INFO:teuthology.orchestra.run.vm09.stdout:(18/140): python3-ceph-common-19.2.3-678.ge911b 1.1 MB/s | 142 kB 00:00 2026-03-09T18:24:40.591 INFO:teuthology.orchestra.run.vm04.stdout:(3/140): ceph-immutable-object-cache-19.2.3-678 1.2 MB/s | 145 kB 00:00 2026-03-09T18:24:40.619 INFO:teuthology.orchestra.run.vm09.stdout:(19/140): python3-cephfs-19.2.3-678.ge911bdeb.e 1.3 MB/s | 165 kB 00:00 2026-03-09T18:24:40.682 INFO:teuthology.orchestra.run.vm09.stdout:(20/140): librgw2-19.2.3-678.ge911bdeb.el9.x86_ 11 MB/s | 5.4 MB 00:00 2026-03-09T18:24:40.743 INFO:teuthology.orchestra.run.vm09.stdout:(21/140): python3-rados-19.2.3-678.ge911bdeb.el 2.5 MB/s | 323 kB 00:00 2026-03-09T18:24:40.808 INFO:teuthology.orchestra.run.vm09.stdout:(22/140): python3-rbd-19.2.3-678.ge911bdeb.el9. 2.4 MB/s | 303 kB 00:00 2026-03-09T18:24:40.864 INFO:teuthology.orchestra.run.vm09.stdout:(23/140): python3-rgw-19.2.3-678.ge911bdeb.el9. 
827 kB/s | 100 kB 00:00 2026-03-09T18:24:40.925 INFO:teuthology.orchestra.run.vm09.stdout:(24/140): rbd-fuse-19.2.3-678.ge911bdeb.el9.x86 728 kB/s | 85 kB 00:00 2026-03-09T18:24:41.044 INFO:teuthology.orchestra.run.vm09.stdout:(25/140): rbd-nbd-19.2.3-678.ge911bdeb.el9.x86_ 1.4 MB/s | 171 kB 00:00 2026-03-09T18:24:41.160 INFO:teuthology.orchestra.run.vm09.stdout:(26/140): ceph-grafana-dashboards-19.2.3-678.ge 268 kB/s | 31 kB 00:00 2026-03-09T18:24:41.239 INFO:teuthology.orchestra.run.vm09.stdout:(27/140): rbd-mirror-19.2.3-678.ge911bdeb.el9.x 8.3 MB/s | 3.1 MB 00:00 2026-03-09T18:24:41.279 INFO:teuthology.orchestra.run.vm09.stdout:(28/140): ceph-mgr-cephadm-19.2.3-678.ge911bdeb 1.2 MB/s | 150 kB 00:00 2026-03-09T18:24:41.557 INFO:teuthology.orchestra.run.vm04.stdout:(4/140): ceph-mds-19.2.3-678.ge911bdeb.el9.x86_ 2.5 MB/s | 2.4 MB 00:00 2026-03-09T18:24:41.618 INFO:teuthology.orchestra.run.vm09.stdout:(29/140): ceph-mgr-dashboard-19.2.3-678.ge911bd 10 MB/s | 3.8 MB 00:00 2026-03-09T18:24:41.741 INFO:teuthology.orchestra.run.vm09.stdout:(30/140): ceph-mgr-modules-core-19.2.3-678.ge91 2.0 MB/s | 253 kB 00:00 2026-03-09T18:24:41.886 INFO:teuthology.orchestra.run.vm09.stdout:(31/140): ceph-mgr-diskprediction-local-19.2.3- 12 MB/s | 7.4 MB 00:00 2026-03-09T18:24:41.888 INFO:teuthology.orchestra.run.vm09.stdout:(32/140): ceph-mgr-rook-19.2.3-678.ge911bdeb.el 336 kB/s | 49 kB 00:00 2026-03-09T18:24:41.913 INFO:teuthology.orchestra.run.vm04.stdout:(5/140): ceph-mgr-19.2.3-678.ge911bdeb.el9.x86_ 3.0 MB/s | 1.1 MB 00:00 2026-03-09T18:24:42.007 INFO:teuthology.orchestra.run.vm09.stdout:(33/140): ceph-prometheus-alerts-19.2.3-678.ge9 139 kB/s | 17 kB 00:00 2026-03-09T18:24:42.013 INFO:teuthology.orchestra.run.vm09.stdout:(34/140): ceph-volume-19.2.3-678.ge911bdeb.el9. 
2.3 MB/s | 299 kB 00:00 2026-03-09T18:24:42.143 INFO:teuthology.orchestra.run.vm09.stdout:(35/140): cephadm-19.2.3-678.ge911bdeb.el9.noar 5.5 MB/s | 769 kB 00:00 2026-03-09T18:24:42.147 INFO:teuthology.orchestra.run.vm09.stdout:(36/140): cryptsetup-2.8.1-3.el9.x86_64.rpm 2.6 MB/s | 351 kB 00:00 2026-03-09T18:24:42.153 INFO:teuthology.orchestra.run.vm04.stdout:(6/140): ceph-base-19.2.3-678.ge911bdeb.el9.x86 1.7 MB/s | 5.5 MB 00:03 2026-03-09T18:24:42.180 INFO:teuthology.orchestra.run.vm09.stdout:(37/140): libconfig-1.7.2-9.el9.x86_64.rpm 2.1 MB/s | 72 kB 00:00 2026-03-09T18:24:42.209 INFO:teuthology.orchestra.run.vm09.stdout:(38/140): ledmon-libs-1.1.0-3.el9.x86_64.rpm 617 kB/s | 40 kB 00:00 2026-03-09T18:24:42.276 INFO:teuthology.orchestra.run.vm09.stdout:(39/140): libquadmath-11.5.0-14.el9.x86_64.rpm 2.7 MB/s | 184 kB 00:00 2026-03-09T18:24:42.296 INFO:teuthology.orchestra.run.vm09.stdout:(40/140): mailcap-2.1.49-5.el9.noarch.rpm 1.7 MB/s | 33 kB 00:00 2026-03-09T18:24:42.306 INFO:teuthology.orchestra.run.vm09.stdout:(41/140): libgfortran-11.5.0-14.el9.x86_64.rpm 6.2 MB/s | 794 kB 00:00 2026-03-09T18:24:42.315 INFO:teuthology.orchestra.run.vm09.stdout:(42/140): pciutils-3.7.0-7.el9.x86_64.rpm 4.9 MB/s | 93 kB 00:00 2026-03-09T18:24:42.366 INFO:teuthology.orchestra.run.vm09.stdout:(43/140): python3-cffi-1.14.5-5.el9.x86_64.rpm 4.1 MB/s | 253 kB 00:00 2026-03-09T18:24:42.397 INFO:teuthology.orchestra.run.vm09.stdout:(44/140): python3-ply-3.11-14.el9.noarch.rpm 3.4 MB/s | 106 kB 00:00 2026-03-09T18:24:42.455 INFO:teuthology.orchestra.run.vm09.stdout:(45/140): python3-pycparser-2.20-6.el9.noarch.r 2.3 MB/s | 135 kB 00:00 2026-03-09T18:24:42.474 INFO:teuthology.orchestra.run.vm09.stdout:(46/140): python3-cryptography-36.0.1-5.el9.x86 7.9 MB/s | 1.2 MB 00:00 2026-03-09T18:24:42.490 INFO:teuthology.orchestra.run.vm09.stdout:(47/140): python3-requests-2.25.1-10.el9.noarch 3.5 MB/s | 126 kB 00:00 2026-03-09T18:24:42.540 INFO:teuthology.orchestra.run.vm09.stdout:(48/140): 
python3-urllib3-1.26.5-7.el9.noarch.r 3.3 MB/s | 218 kB 00:00 2026-03-09T18:24:42.591 INFO:teuthology.orchestra.run.vm09.stdout:(49/140): unzip-6.0-59.el9.x86_64.rpm 1.8 MB/s | 182 kB 00:00 2026-03-09T18:24:42.744 INFO:teuthology.orchestra.run.vm09.stdout:(50/140): zip-3.0-35.el9.x86_64.rpm 1.3 MB/s | 266 kB 00:00 2026-03-09T18:24:42.765 INFO:teuthology.orchestra.run.vm09.stdout:(51/140): boost-program-options-1.75.0-13.el9.x 599 kB/s | 104 kB 00:00 2026-03-09T18:24:42.842 INFO:teuthology.orchestra.run.vm09.stdout:(52/140): flexiblas-3.0.4-9.el9.x86_64.rpm 304 kB/s | 30 kB 00:00 2026-03-09T18:24:42.914 INFO:teuthology.orchestra.run.vm09.stdout:(53/140): flexiblas-openblas-openmp-3.0.4-9.el9 207 kB/s | 15 kB 00:00 2026-03-09T18:24:42.970 INFO:teuthology.orchestra.run.vm09.stdout:(54/140): libnbd-1.20.3-4.el9.x86_64.rpm 2.9 MB/s | 164 kB 00:00 2026-03-09T18:24:42.992 INFO:teuthology.orchestra.run.vm04.stdout:(7/140): ceph-mon-19.2.3-678.ge911bdeb.el9.x86_ 4.4 MB/s | 4.7 MB 00:01 2026-03-09T18:24:43.007 INFO:teuthology.orchestra.run.vm09.stdout:(55/140): libpmemobj-1.12.1-1.el9.x86_64.rpm 4.3 MB/s | 160 kB 00:00 2026-03-09T18:24:43.037 INFO:teuthology.orchestra.run.vm09.stdout:(56/140): librabbitmq-0.11.0-7.el9.x86_64.rpm 1.5 MB/s | 45 kB 00:00 2026-03-09T18:24:43.121 INFO:teuthology.orchestra.run.vm09.stdout:(57/140): librdkafka-1.6.1-102.el9.x86_64.rpm 7.7 MB/s | 662 kB 00:00 2026-03-09T18:24:43.154 INFO:teuthology.orchestra.run.vm09.stdout:(58/140): libstoragemgmt-1.10.1-1.el9.x86_64.rp 7.4 MB/s | 246 kB 00:00 2026-03-09T18:24:43.166 INFO:teuthology.orchestra.run.vm09.stdout:(59/140): flexiblas-netlib-3.0.4-9.el9.x86_64.r 7.5 MB/s | 3.0 MB 00:00 2026-03-09T18:24:43.198 INFO:teuthology.orchestra.run.vm09.stdout:(60/140): lttng-ust-2.12.0-6.el9.x86_64.rpm 9.1 MB/s | 292 kB 00:00 2026-03-09T18:24:43.227 INFO:teuthology.orchestra.run.vm09.stdout:(61/140): lua-5.4.4-4.el9.x86_64.rpm 6.4 MB/s | 188 kB 00:00 2026-03-09T18:24:43.241 
INFO:teuthology.orchestra.run.vm09.stdout:(62/140): libxslt-1.1.34-12.el9.x86_64.rpm 2.6 MB/s | 233 kB 00:00 2026-03-09T18:24:43.257 INFO:teuthology.orchestra.run.vm09.stdout:(63/140): openblas-0.3.29-1.el9.x86_64.rpm 1.4 MB/s | 42 kB 00:00 2026-03-09T18:24:43.416 INFO:teuthology.orchestra.run.vm09.stdout:(64/140): openblas-openmp-0.3.29-1.el9.x86_64.r 30 MB/s | 5.3 MB 00:00 2026-03-09T18:24:43.420 INFO:teuthology.orchestra.run.vm09.stdout:(65/140): protobuf-3.14.0-17.el9.x86_64.rpm 6.2 MB/s | 1.0 MB 00:00 2026-03-09T18:24:43.471 INFO:teuthology.orchestra.run.vm09.stdout:(66/140): python3-devel-3.9.25-3.el9.x86_64.rpm 4.7 MB/s | 244 kB 00:00 2026-03-09T18:24:43.509 INFO:teuthology.orchestra.run.vm09.stdout:(67/140): python3-iniconfig-1.1.1-7.el9.noarch. 467 kB/s | 17 kB 00:00 2026-03-09T18:24:43.536 INFO:teuthology.orchestra.run.vm09.stdout:(68/140): python3-babel-2.9.1-2.el9.noarch.rpm 50 MB/s | 6.0 MB 00:00 2026-03-09T18:24:43.541 INFO:teuthology.orchestra.run.vm09.stdout:(69/140): python3-jinja2-2.11.3-8.el9.noarch.rp 7.5 MB/s | 249 kB 00:00 2026-03-09T18:24:43.580 INFO:teuthology.orchestra.run.vm09.stdout:(70/140): python3-jmespath-1.0.1-1.el9.noarch.r 1.1 MB/s | 48 kB 00:00 2026-03-09T18:24:43.610 INFO:teuthology.orchestra.run.vm09.stdout:(71/140): python3-mako-1.1.4-6.el9.noarch.rpm 5.8 MB/s | 172 kB 00:00 2026-03-09T18:24:43.627 INFO:teuthology.orchestra.run.vm09.stdout:(72/140): python3-libstoragemgmt-1.10.1-1.el9.x 2.0 MB/s | 177 kB 00:00 2026-03-09T18:24:43.637 INFO:teuthology.orchestra.run.vm09.stdout:(73/140): python3-markupsafe-1.1.1-12.el9.x86_6 1.3 MB/s | 35 kB 00:00 2026-03-09T18:24:43.696 INFO:teuthology.orchestra.run.vm09.stdout:(74/140): python3-numpy-f2py-1.23.5-2.el9.x86_6 7.3 MB/s | 442 kB 00:00 2026-03-09T18:24:43.724 INFO:teuthology.orchestra.run.vm09.stdout:(75/140): python3-packaging-20.9-5.el9.noarch.r 2.8 MB/s | 77 kB 00:00 2026-03-09T18:24:43.770 INFO:teuthology.orchestra.run.vm09.stdout:(76/140): python3-pluggy-0.13.1-7.el9.noarch.rp 
917 kB/s | 41 kB 00:00 2026-03-09T18:24:43.853 INFO:teuthology.orchestra.run.vm09.stdout:(77/140): python3-numpy-1.23.5-2.el9.x86_64.rpm 27 MB/s | 6.1 MB 00:00 2026-03-09T18:24:43.870 INFO:teuthology.orchestra.run.vm09.stdout:(78/140): python3-protobuf-3.14.0-17.el9.noarch 2.6 MB/s | 267 kB 00:00 2026-03-09T18:24:43.913 INFO:teuthology.orchestra.run.vm09.stdout:(79/140): python3-py-1.10.0-6.el9.noarch.rpm 7.8 MB/s | 477 kB 00:00 2026-03-09T18:24:43.930 INFO:teuthology.orchestra.run.vm09.stdout:(80/140): python3-pyasn1-0.4.8-7.el9.noarch.rpm 2.5 MB/s | 157 kB 00:00 2026-03-09T18:24:43.961 INFO:teuthology.orchestra.run.vm09.stdout:(81/140): python3-pyasn1-modules-0.4.8-7.el9.no 5.8 MB/s | 277 kB 00:00 2026-03-09T18:24:43.989 INFO:teuthology.orchestra.run.vm09.stdout:(82/140): python3-requests-oauthlib-1.3.0-12.el 1.9 MB/s | 54 kB 00:00 2026-03-09T18:24:43.993 INFO:teuthology.orchestra.run.vm09.stdout:(83/140): python3-pytest-6.2.2-7.el9.noarch.rpm 8.1 MB/s | 519 kB 00:00 2026-03-09T18:24:44.021 INFO:teuthology.orchestra.run.vm09.stdout:(84/140): python3-toml-0.10.2-6.el9.noarch.rpm 1.5 MB/s | 42 kB 00:00 2026-03-09T18:24:44.067 INFO:teuthology.orchestra.run.vm09.stdout:(85/140): qatlib-25.08.0-2.el9.x86_64.rpm 5.2 MB/s | 240 kB 00:00 2026-03-09T18:24:44.178 INFO:teuthology.orchestra.run.vm09.stdout:(86/140): qatlib-service-25.08.0-2.el9.x86_64.r 333 kB/s | 37 kB 00:00 2026-03-09T18:24:44.208 INFO:teuthology.orchestra.run.vm09.stdout:(87/140): qatzip-libs-1.3.1-1.el9.x86_64.rpm 2.3 MB/s | 66 kB 00:00 2026-03-09T18:24:44.243 INFO:teuthology.orchestra.run.vm09.stdout:(88/140): socat-1.7.4.1-8.el9.x86_64.rpm 8.6 MB/s | 303 kB 00:00 2026-03-09T18:24:44.270 INFO:teuthology.orchestra.run.vm09.stdout:(89/140): xmlstarlet-1.6.1-20.el9.x86_64.rpm 2.3 MB/s | 64 kB 00:00 2026-03-09T18:24:44.386 INFO:teuthology.orchestra.run.vm09.stdout:(90/140): python3-scipy-1.9.3-2.el9.x86_64.rpm 49 MB/s | 19 MB 00:00 2026-03-09T18:24:44.418 INFO:teuthology.orchestra.run.vm09.stdout:(91/140): 
lua-devel-5.4.4-4.el9.x86_64.rpm 151 kB/s | 22 kB 00:00 2026-03-09T18:24:44.433 INFO:teuthology.orchestra.run.vm09.stdout:(92/140): abseil-cpp-20211102.0-4.el9.x86_64.rp 39 MB/s | 551 kB 00:00 2026-03-09T18:24:44.439 INFO:teuthology.orchestra.run.vm09.stdout:(93/140): gperftools-libs-2.9.1-3.el9.x86_64.rp 49 MB/s | 308 kB 00:00 2026-03-09T18:24:44.442 INFO:teuthology.orchestra.run.vm09.stdout:(94/140): grpc-data-1.46.7-10.el9.noarch.rpm 8.4 MB/s | 19 kB 00:00 2026-03-09T18:24:44.501 INFO:teuthology.orchestra.run.vm09.stdout:(95/140): libarrow-9.0.0-15.el9.x86_64.rpm 74 MB/s | 4.4 MB 00:00 2026-03-09T18:24:44.504 INFO:teuthology.orchestra.run.vm09.stdout:(96/140): libarrow-doc-9.0.0-15.el9.noarch.rpm 9.1 MB/s | 25 kB 00:00 2026-03-09T18:24:44.507 INFO:teuthology.orchestra.run.vm09.stdout:(97/140): liboath-2.6.12-1.el9.x86_64.rpm 18 MB/s | 49 kB 00:00 2026-03-09T18:24:44.511 INFO:teuthology.orchestra.run.vm09.stdout:(98/140): libunwind-1.6.2-1.el9.x86_64.rpm 18 MB/s | 67 kB 00:00 2026-03-09T18:24:44.515 INFO:teuthology.orchestra.run.vm09.stdout:(99/140): luarocks-3.9.2-5.el9.noarch.rpm 35 MB/s | 151 kB 00:00 2026-03-09T18:24:44.530 INFO:teuthology.orchestra.run.vm09.stdout:(100/140): parquet-libs-9.0.0-15.el9.x86_64.rpm 58 MB/s | 838 kB 00:00 2026-03-09T18:24:44.539 INFO:teuthology.orchestra.run.vm09.stdout:(101/140): python3-asyncssh-2.13.2-5.el9.noarch 61 MB/s | 548 kB 00:00 2026-03-09T18:24:44.541 INFO:teuthology.orchestra.run.vm09.stdout:(102/140): python3-autocommand-2.2.2-8.el9.noar 12 MB/s | 29 kB 00:00 2026-03-09T18:24:44.545 INFO:teuthology.orchestra.run.vm09.stdout:(103/140): python3-backports-tarfile-1.2.0-1.el 19 MB/s | 60 kB 00:00 2026-03-09T18:24:44.548 INFO:teuthology.orchestra.run.vm09.stdout:(104/140): python3-bcrypt-3.2.2-1.el9.x86_64.rp 16 MB/s | 43 kB 00:00 2026-03-09T18:24:44.550 INFO:teuthology.orchestra.run.vm09.stdout:(105/140): python3-cachetools-4.2.4-1.el9.noarc 12 MB/s | 32 kB 00:00 2026-03-09T18:24:44.553 
INFO:teuthology.orchestra.run.vm09.stdout:(106/140): python3-certifi-2023.05.07-4.el9.noa 6.7 MB/s | 14 kB 00:00 2026-03-09T18:24:44.558 INFO:teuthology.orchestra.run.vm09.stdout:(107/140): python3-cheroot-10.0.1-4.el9.noarch. 34 MB/s | 173 kB 00:00 2026-03-09T18:24:44.564 INFO:teuthology.orchestra.run.vm09.stdout:(108/140): python3-cherrypy-18.6.1-2.el9.noarch 56 MB/s | 358 kB 00:00 2026-03-09T18:24:44.570 INFO:teuthology.orchestra.run.vm09.stdout:(109/140): python3-google-auth-2.45.0-1.el9.noa 43 MB/s | 254 kB 00:00 2026-03-09T18:24:44.598 INFO:teuthology.orchestra.run.vm09.stdout:(110/140): python3-grpcio-1.46.7-10.el9.x86_64. 73 MB/s | 2.0 MB 00:00 2026-03-09T18:24:44.602 INFO:teuthology.orchestra.run.vm09.stdout:(111/140): python3-grpcio-tools-1.46.7-10.el9.x 36 MB/s | 144 kB 00:00 2026-03-09T18:24:44.605 INFO:teuthology.orchestra.run.vm09.stdout:(112/140): python3-jaraco-8.2.1-3.el9.noarch.rp 4.3 MB/s | 11 kB 00:00 2026-03-09T18:24:44.607 INFO:teuthology.orchestra.run.vm09.stdout:(113/140): python3-jaraco-classes-3.2.1-5.el9.n 7.3 MB/s | 18 kB 00:00 2026-03-09T18:24:44.610 INFO:teuthology.orchestra.run.vm09.stdout:(114/140): python3-jaraco-collections-3.0.0-8.e 9.3 MB/s | 23 kB 00:00 2026-03-09T18:24:44.612 INFO:teuthology.orchestra.run.vm09.stdout:(115/140): python3-jaraco-context-6.0.1-3.el9.n 8.4 MB/s | 20 kB 00:00 2026-03-09T18:24:44.615 INFO:teuthology.orchestra.run.vm09.stdout:(116/140): python3-jaraco-functools-3.5.0-2.el9 8.4 MB/s | 19 kB 00:00 2026-03-09T18:24:44.618 INFO:teuthology.orchestra.run.vm09.stdout:(117/140): python3-jaraco-text-4.0.0-2.el9.noar 7.8 MB/s | 26 kB 00:00 2026-03-09T18:24:44.633 INFO:teuthology.orchestra.run.vm09.stdout:(118/140): python3-kubernetes-26.1.0-3.el9.noar 72 MB/s | 1.0 MB 00:00 2026-03-09T18:24:44.638 INFO:teuthology.orchestra.run.vm09.stdout:(119/140): python3-logutils-0.3.5-21.el9.noarch 9.7 MB/s | 46 kB 00:00 2026-03-09T18:24:44.641 INFO:teuthology.orchestra.run.vm09.stdout:(120/140): 
python3-more-itertools-8.12.0-2.el9. 23 MB/s | 79 kB 00:00 2026-03-09T18:24:44.645 INFO:teuthology.orchestra.run.vm09.stdout:(121/140): python3-natsort-7.1.1-5.el9.noarch.r 15 MB/s | 58 kB 00:00 2026-03-09T18:24:44.651 INFO:teuthology.orchestra.run.vm09.stdout:(122/140): python3-pecan-1.4.2-3.el9.noarch.rpm 48 MB/s | 272 kB 00:00 2026-03-09T18:24:44.656 INFO:teuthology.orchestra.run.vm09.stdout:(123/140): python3-portend-3.1.0-2.el9.noarch.r 3.9 MB/s | 16 kB 00:00 2026-03-09T18:24:44.660 INFO:teuthology.orchestra.run.vm09.stdout:(124/140): python3-pyOpenSSL-21.0.0-1.el9.noarc 21 MB/s | 90 kB 00:00 2026-03-09T18:24:44.663 INFO:teuthology.orchestra.run.vm09.stdout:(125/140): python3-repoze-lru-0.7-16.el9.noarch 12 MB/s | 31 kB 00:00 2026-03-09T18:24:44.668 INFO:teuthology.orchestra.run.vm09.stdout:(126/140): python3-routes-2.5.1-5.el9.noarch.rp 42 MB/s | 188 kB 00:00 2026-03-09T18:24:44.671 INFO:teuthology.orchestra.run.vm09.stdout:(127/140): python3-rsa-4.9-2.el9.noarch.rpm 19 MB/s | 59 kB 00:00 2026-03-09T18:24:44.674 INFO:teuthology.orchestra.run.vm09.stdout:(128/140): python3-tempora-5.0.0-2.el9.noarch.r 14 MB/s | 36 kB 00:00 2026-03-09T18:24:44.679 INFO:teuthology.orchestra.run.vm09.stdout:(129/140): python3-typing-extensions-4.15.0-1.e 21 MB/s | 86 kB 00:00 2026-03-09T18:24:44.685 INFO:teuthology.orchestra.run.vm09.stdout:(130/140): python3-webob-1.8.8-2.el9.noarch.rpm 41 MB/s | 230 kB 00:00 2026-03-09T18:24:44.689 INFO:teuthology.orchestra.run.vm09.stdout:(131/140): python3-websocket-client-1.2.3-2.el9 20 MB/s | 90 kB 00:00 2026-03-09T18:24:44.697 INFO:teuthology.orchestra.run.vm09.stdout:(132/140): python3-werkzeug-2.0.3-3.el9.1.noarc 58 MB/s | 427 kB 00:00 2026-03-09T18:24:44.699 INFO:teuthology.orchestra.run.vm09.stdout:(133/140): python3-xmltodict-0.12.0-15.el9.noar 8.9 MB/s | 22 kB 00:00 2026-03-09T18:24:44.702 INFO:teuthology.orchestra.run.vm09.stdout:(134/140): python3-zc-lockfile-2.0-10.el9.noarc 8.1 MB/s | 20 kB 00:00 2026-03-09T18:24:44.715 
INFO:teuthology.orchestra.run.vm09.stdout:(135/140): re2-20211101-20.el9.x86_64.rpm 15 MB/s | 191 kB 00:00 2026-03-09T18:24:44.755 INFO:teuthology.orchestra.run.vm09.stdout:(136/140): thrift-0.15.0-4.el9.x86_64.rpm 40 MB/s | 1.6 MB 00:00 2026-03-09T18:24:44.755 INFO:teuthology.orchestra.run.vm04.stdout:(8/140): ceph-common-19.2.3-678.ge911bdeb.el9.x 3.7 MB/s | 22 MB 00:05 2026-03-09T18:24:44.759 INFO:teuthology.orchestra.run.vm09.stdout:(137/140): protobuf-compiler-3.14.0-17.el9.x86_ 2.3 MB/s | 862 kB 00:00 2026-03-09T18:24:44.870 INFO:teuthology.orchestra.run.vm04.stdout:(9/140): ceph-selinux-19.2.3-678.ge911bdeb.el9. 219 kB/s | 25 kB 00:00 2026-03-09T18:24:45.135 INFO:teuthology.orchestra.run.vm04.stdout:(10/140): ceph-radosgw-19.2.3-678.ge911bdeb.el9 5.0 MB/s | 11 MB 00:02 2026-03-09T18:24:45.254 INFO:teuthology.orchestra.run.vm04.stdout:(11/140): libcephfs-devel-19.2.3-678.ge911bdeb. 284 kB/s | 34 kB 00:00 2026-03-09T18:24:45.608 INFO:teuthology.orchestra.run.vm04.stdout:(12/140): libcephfs2-19.2.3-678.ge911bdeb.el9.x 2.8 MB/s | 1.0 MB 00:00 2026-03-09T18:24:45.728 INFO:teuthology.orchestra.run.vm04.stdout:(13/140): libcephsqlite-19.2.3-678.ge911bdeb.el 1.3 MB/s | 163 kB 00:00 2026-03-09T18:24:45.805 INFO:teuthology.orchestra.run.vm09.stdout:(138/140): librados2-19.2.3-678.ge911bdeb.el9.x 3.3 MB/s | 3.4 MB 00:01 2026-03-09T18:24:45.839 INFO:teuthology.orchestra.run.vm09.stdout:(139/140): librbd1-19.2.3-678.ge911bdeb.el9.x86 2.9 MB/s | 3.2 MB 00:01 2026-03-09T18:24:45.848 INFO:teuthology.orchestra.run.vm04.stdout:(14/140): librados-devel-19.2.3-678.ge911bdeb.e 1.0 MB/s | 127 kB 00:00 2026-03-09T18:24:46.016 INFO:teuthology.orchestra.run.vm09.stdout:(140/140): ceph-test-19.2.3-678.ge911bdeb.el9.x 7.9 MB/s | 50 MB 00:06 2026-03-09T18:24:46.019 INFO:teuthology.orchestra.run.vm09.stdout:-------------------------------------------------------------------------------- 2026-03-09T18:24:46.019 INFO:teuthology.orchestra.run.vm09.stdout:Total 16 MB/s | 211 MB 00:13 
2026-03-09T18:24:46.084 INFO:teuthology.orchestra.run.vm04.stdout:(15/140): libradosstriper1-19.2.3-678.ge911bdeb 2.1 MB/s | 503 kB 00:00 2026-03-09T18:24:46.751 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check 2026-03-09T18:24:46.814 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded. 2026-03-09T18:24:46.814 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test 2026-03-09T18:24:47.388 INFO:teuthology.orchestra.run.vm04.stdout:(16/140): librgw2-19.2.3-678.ge911bdeb.el9.x86_ 4.1 MB/s | 5.4 MB 00:01 2026-03-09T18:24:47.506 INFO:teuthology.orchestra.run.vm04.stdout:(17/140): python3-ceph-argparse-19.2.3-678.ge91 382 kB/s | 45 kB 00:00 2026-03-09T18:24:47.629 INFO:teuthology.orchestra.run.vm04.stdout:(18/140): python3-ceph-common-19.2.3-678.ge911b 1.1 MB/s | 142 kB 00:00 2026-03-09T18:24:47.692 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded. 2026-03-09T18:24:47.692 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction 2026-03-09T18:24:47.748 INFO:teuthology.orchestra.run.vm04.stdout:(19/140): python3-cephfs-19.2.3-678.ge911bdeb.e 1.4 MB/s | 165 kB 00:00 2026-03-09T18:24:47.869 INFO:teuthology.orchestra.run.vm04.stdout:(20/140): python3-rados-19.2.3-678.ge911bdeb.el 2.6 MB/s | 323 kB 00:00 2026-03-09T18:24:47.991 INFO:teuthology.orchestra.run.vm04.stdout:(21/140): python3-rbd-19.2.3-678.ge911bdeb.el9. 2.4 MB/s | 303 kB 00:00 2026-03-09T18:24:48.140 INFO:teuthology.orchestra.run.vm04.stdout:(22/140): python3-rgw-19.2.3-678.ge911bdeb.el9. 
670 kB/s | 100 kB 00:00 2026-03-09T18:24:48.259 INFO:teuthology.orchestra.run.vm04.stdout:(23/140): rbd-fuse-19.2.3-678.ge911bdeb.el9.x86 716 kB/s | 85 kB 00:00 2026-03-09T18:24:48.728 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1 2026-03-09T18:24:48.745 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 1/142 2026-03-09T18:24:48.760 INFO:teuthology.orchestra.run.vm09.stdout: Installing : thrift-0.15.0-4.el9.x86_64 2/142 2026-03-09T18:24:48.951 INFO:teuthology.orchestra.run.vm09.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 3/142 2026-03-09T18:24:48.954 INFO:teuthology.orchestra.run.vm09.stdout: Upgrading : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/142 2026-03-09T18:24:48.975 INFO:teuthology.orchestra.run.vm04.stdout:(24/140): rbd-mirror-19.2.3-678.ge911bdeb.el9.x 4.4 MB/s | 3.1 MB 00:00 2026-03-09T18:24:49.022 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/142 2026-03-09T18:24:49.024 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/142 2026-03-09T18:24:49.057 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/142 2026-03-09T18:24:49.068 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 6/142 2026-03-09T18:24:49.072 INFO:teuthology.orchestra.run.vm09.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 7/142 2026-03-09T18:24:49.076 INFO:teuthology.orchestra.run.vm09.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 8/142 2026-03-09T18:24:49.089 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 9/142 2026-03-09T18:24:49.094 INFO:teuthology.orchestra.run.vm04.stdout:(25/140): rbd-nbd-19.2.3-678.ge911bdeb.el9.x86_ 1.4 MB/s | 171 kB 00:00 2026-03-09T18:24:49.097 INFO:teuthology.orchestra.run.vm09.stdout: Installing : 
python3-packaging-20.9-5.el9.noarch 10/142 2026-03-09T18:24:49.108 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libnbd-1.20.3-4.el9.x86_64 11/142 2026-03-09T18:24:49.110 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 12/142 2026-03-09T18:24:49.151 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 12/142 2026-03-09T18:24:49.153 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 13/142 2026-03-09T18:24:49.172 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 13/142 2026-03-09T18:24:49.212 INFO:teuthology.orchestra.run.vm04.stdout:(26/140): ceph-grafana-dashboards-19.2.3-678.ge 264 kB/s | 31 kB 00:00 2026-03-09T18:24:49.213 INFO:teuthology.orchestra.run.vm09.stdout: Installing : re2-1:20211101-20.el9.x86_64 14/142 2026-03-09T18:24:49.257 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 15/142 2026-03-09T18:24:49.263 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-werkzeug-2.0.3-3.el9.1.noarch 16/142 2026-03-09T18:24:49.269 INFO:teuthology.orchestra.run.vm09.stdout: Installing : liboath-2.6.12-1.el9.x86_64 17/142 2026-03-09T18:24:49.274 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-toml-0.10.2-6.el9.noarch 18/142 2026-03-09T18:24:49.304 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 19/142 2026-03-09T18:24:49.315 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pyasn1-0.4.8-7.el9.noarch 20/142 2026-03-09T18:24:49.329 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-markupsafe-1.1.1-12.el9.x86_64 21/142 2026-03-09T18:24:49.333 INFO:teuthology.orchestra.run.vm04.stdout:(27/140): ceph-mgr-cephadm-19.2.3-678.ge911bdeb 1.2 MB/s | 150 kB 00:00 2026-03-09T18:24:49.339 
INFO:teuthology.orchestra.run.vm09.stdout: Installing : protobuf-3.14.0-17.el9.x86_64 22/142 2026-03-09T18:24:49.344 INFO:teuthology.orchestra.run.vm09.stdout: Installing : lua-5.4.4-4.el9.x86_64 23/142 2026-03-09T18:24:49.351 INFO:teuthology.orchestra.run.vm09.stdout: Installing : flexiblas-3.0.4-9.el9.x86_64 24/142 2026-03-09T18:24:49.388 INFO:teuthology.orchestra.run.vm09.stdout: Installing : unzip-6.0-59.el9.x86_64 25/142 2026-03-09T18:24:49.408 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-urllib3-1.26.5-7.el9.noarch 26/142 2026-03-09T18:24:49.413 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-requests-2.25.1-10.el9.noarch 27/142 2026-03-09T18:24:49.421 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libquadmath-11.5.0-14.el9.x86_64 28/142 2026-03-09T18:24:49.424 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libgfortran-11.5.0-14.el9.x86_64 29/142 2026-03-09T18:24:49.458 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 30/142 2026-03-09T18:24:49.466 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 31/142 2026-03-09T18:24:49.477 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 32/142 2026-03-09T18:24:49.494 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 33/142 2026-03-09T18:24:49.504 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 34/142 2026-03-09T18:24:49.538 INFO:teuthology.orchestra.run.vm09.stdout: Installing : zip-3.0-35.el9.x86_64 35/142 2026-03-09T18:24:49.545 INFO:teuthology.orchestra.run.vm09.stdout: Installing : luarocks-3.9.2-5.el9.noarch 36/142 2026-03-09T18:24:49.555 INFO:teuthology.orchestra.run.vm09.stdout: Installing : lua-devel-5.4.4-4.el9.x86_64 37/142 2026-03-09T18:24:49.588 INFO:teuthology.orchestra.run.vm09.stdout: Installing : 
protobuf-compiler-3.14.0-17.el9.x86_64 38/142 2026-03-09T18:24:49.660 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-mako-1.1.4-6.el9.noarch 39/142 2026-03-09T18:24:49.679 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9.noarch 40/142 2026-03-09T18:24:49.691 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-rsa-4.9-2.el9.noarch 41/142 2026-03-09T18:24:49.698 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 42/142 2026-03-09T18:24:49.705 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-portend-3.1.0-2.el9.noarch 43/142 2026-03-09T18:24:49.716 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 44/142 2026-03-09T18:24:49.723 INFO:teuthology.orchestra.run.vm09.stdout: Installing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 45/142 2026-03-09T18:24:49.729 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 46/142 2026-03-09T18:24:49.748 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 47/142 2026-03-09T18:24:49.780 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 48/142 2026-03-09T18:24:49.789 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-webob-1.8.8-2.el9.noarch 49/142 2026-03-09T18:24:49.797 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 50/142 2026-03-09T18:24:49.812 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 51/142 2026-03-09T18:24:49.828 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 52/142 2026-03-09T18:24:49.843 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 53/142 2026-03-09T18:24:49.918 INFO:teuthology.orchestra.run.vm09.stdout: Installing 
: python3-logutils-0.3.5-21.el9.noarch 54/142 2026-03-09T18:24:49.928 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pecan-1.4.2-3.el9.noarch 55/142 2026-03-09T18:24:49.941 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-certifi-2023.05.07-4.el9.noarch 56/142 2026-03-09T18:24:49.995 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 57/142 2026-03-09T18:24:50.166 INFO:teuthology.orchestra.run.vm04.stdout:(28/140): ceph-mgr-dashboard-19.2.3-678.ge911bd 4.6 MB/s | 3.8 MB 00:00 2026-03-09T18:24:50.423 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-google-auth-1:2.45.0-1.el9.noarch 58/142 2026-03-09T18:24:50.443 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 59/142 2026-03-09T18:24:50.451 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 60/142 2026-03-09T18:24:50.462 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jaraco-context-6.0.1-3.el9.noarch 61/142 2026-03-09T18:24:50.472 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 62/142 2026-03-09T18:24:50.479 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 63/142 2026-03-09T18:24:50.484 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 64/142 2026-03-09T18:24:50.494 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 65/142 2026-03-09T18:24:50.498 INFO:teuthology.orchestra.run.vm09.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 66/142 2026-03-09T18:24:50.501 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 67/142 2026-03-09T18:24:50.537 INFO:teuthology.orchestra.run.vm09.stdout: Installing : grpc-data-1.46.7-10.el9.noarch 68/142 2026-03-09T18:24:50.596 
INFO:teuthology.orchestra.run.vm09.stdout: Installing : abseil-cpp-20211102.0-4.el9.x86_64 69/142 2026-03-09T18:24:50.612 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-grpcio-1.46.7-10.el9.x86_64 70/142 2026-03-09T18:24:50.674 INFO:teuthology.orchestra.run.vm09.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 71/142 2026-03-09T18:24:50.716 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-py-1.10.0-6.el9.noarch 72/142 2026-03-09T18:24:50.731 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-protobuf-3.14.0-17.el9.noarch 73/142 2026-03-09T18:24:50.743 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-grpcio-tools-1.46.7-10.el9.x86_64 74/142 2026-03-09T18:24:50.750 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pluggy-0.13.1-7.el9.noarch 75/142 2026-03-09T18:24:50.797 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-iniconfig-1.1.1-7.el9.noarch 76/142 2026-03-09T18:24:51.005 INFO:teuthology.orchestra.run.vm04.stdout:(29/140): ceph-osd-19.2.3-678.ge911bdeb.el9.x86 1.9 MB/s | 17 MB 00:08 2026-03-09T18:24:51.100 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-devel-3.9.25-3.el9.x86_64 77/142 2026-03-09T18:24:51.135 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 78/142 2026-03-09T18:24:51.143 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jinja2-2.11.3-8.el9.noarch 79/142 2026-03-09T18:24:51.214 INFO:teuthology.orchestra.run.vm09.stdout: Installing : openblas-0.3.29-1.el9.x86_64 80/142 2026-03-09T18:24:51.218 INFO:teuthology.orchestra.run.vm09.stdout: Installing : openblas-openmp-0.3.29-1.el9.x86_64 81/142 2026-03-09T18:24:51.239 INFO:teuthology.orchestra.run.vm04.stdout:(30/140): ceph-mgr-modules-core-19.2.3-678.ge91 1.1 MB/s | 253 kB 00:00 2026-03-09T18:24:51.248 INFO:teuthology.orchestra.run.vm09.stdout: Installing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 82/142 2026-03-09T18:24:51.353 
INFO:teuthology.orchestra.run.vm04.stdout:(31/140): ceph-mgr-rook-19.2.3-678.ge911bdeb.el 431 kB/s | 49 kB 00:00 2026-03-09T18:24:51.467 INFO:teuthology.orchestra.run.vm04.stdout:(32/140): ceph-prometheus-alerts-19.2.3-678.ge9 147 kB/s | 17 kB 00:00 2026-03-09T18:24:51.694 INFO:teuthology.orchestra.run.vm09.stdout: Installing : flexiblas-netlib-3.0.4-9.el9.x86_64 83/142 2026-03-09T18:24:51.695 INFO:teuthology.orchestra.run.vm04.stdout:(33/140): ceph-volume-19.2.3-678.ge911bdeb.el9. 1.3 MB/s | 299 kB 00:00 2026-03-09T18:24:51.828 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-numpy-1:1.23.5-2.el9.x86_64 84/142 2026-03-09T18:24:51.846 INFO:teuthology.orchestra.run.vm04.stdout:(34/140): ceph-mgr-diskprediction-local-19.2.3- 4.4 MB/s | 7.4 MB 00:01 2026-03-09T18:24:51.949 INFO:teuthology.orchestra.run.vm04.stdout:(35/140): cryptsetup-2.8.1-3.el9.x86_64.rpm 3.3 MB/s | 351 kB 00:00 2026-03-09T18:24:51.967 INFO:teuthology.orchestra.run.vm04.stdout:(36/140): ledmon-libs-1.1.0-3.el9.x86_64.rpm 2.3 MB/s | 40 kB 00:00 2026-03-09T18:24:51.985 INFO:teuthology.orchestra.run.vm04.stdout:(37/140): libconfig-1.7.2-9.el9.x86_64.rpm 3.9 MB/s | 72 kB 00:00 2026-03-09T18:24:52.070 INFO:teuthology.orchestra.run.vm04.stdout:(38/140): libgfortran-11.5.0-14.el9.x86_64.rpm 9.1 MB/s | 794 kB 00:00 2026-03-09T18:24:52.090 INFO:teuthology.orchestra.run.vm04.stdout:(39/140): libquadmath-11.5.0-14.el9.x86_64.rpm 9.3 MB/s | 184 kB 00:00 2026-03-09T18:24:52.108 INFO:teuthology.orchestra.run.vm04.stdout:(40/140): mailcap-2.1.49-5.el9.noarch.rpm 1.8 MB/s | 33 kB 00:00 2026-03-09T18:24:52.127 INFO:teuthology.orchestra.run.vm04.stdout:(41/140): pciutils-3.7.0-7.el9.x86_64.rpm 5.0 MB/s | 93 kB 00:00 2026-03-09T18:24:52.151 INFO:teuthology.orchestra.run.vm04.stdout:(42/140): cephadm-19.2.3-678.ge911bdeb.el9.noar 1.7 MB/s | 769 kB 00:00 2026-03-09T18:24:52.163 INFO:teuthology.orchestra.run.vm04.stdout:(43/140): python3-cffi-1.14.5-5.el9.x86_64.rpm 6.8 MB/s | 253 kB 00:00 
2026-03-09T18:24:52.183 INFO:teuthology.orchestra.run.vm04.stdout:(44/140): python3-ply-3.11-14.el9.noarch.rpm 5.3 MB/s | 106 kB 00:00
2026-03-09T18:24:52.202 INFO:teuthology.orchestra.run.vm04.stdout:(45/140): python3-pycparser-2.20-6.el9.noarch.r 6.8 MB/s | 135 kB 00:00
2026-03-09T18:24:52.222 INFO:teuthology.orchestra.run.vm04.stdout:(46/140): python3-requests-2.25.1-10.el9.noarch 6.6 MB/s | 126 kB 00:00
2026-03-09T18:24:52.241 INFO:teuthology.orchestra.run.vm04.stdout:(47/140): python3-urllib3-1.26.5-7.el9.noarch.r 11 MB/s | 218 kB 00:00
2026-03-09T18:24:52.261 INFO:teuthology.orchestra.run.vm04.stdout:(48/140): unzip-6.0-59.el9.x86_64.rpm 9.2 MB/s | 182 kB 00:00
2026-03-09T18:24:52.296 INFO:teuthology.orchestra.run.vm04.stdout:(49/140): zip-3.0-35.el9.x86_64.rpm 7.5 MB/s | 266 kB 00:00
2026-03-09T18:24:52.338 INFO:teuthology.orchestra.run.vm04.stdout:(50/140): python3-cryptography-36.0.1-5.el9.x86 6.7 MB/s | 1.2 MB 00:00
2026-03-09T18:24:52.424 INFO:teuthology.orchestra.run.vm04.stdout:(51/140): flexiblas-3.0.4-9.el9.x86_64.rpm 345 kB/s | 30 kB 00:00
2026-03-09T18:24:52.438 INFO:teuthology.orchestra.run.vm04.stdout:(52/140): boost-program-options-1.75.0-13.el9.x 733 kB/s | 104 kB 00:00
2026-03-09T18:24:52.468 INFO:teuthology.orchestra.run.vm04.stdout:(53/140): flexiblas-openblas-openmp-3.0.4-9.el9 506 kB/s | 15 kB 00:00
2026-03-09T18:24:52.525 INFO:teuthology.orchestra.run.vm04.stdout:(54/140): libnbd-1.20.3-4.el9.x86_64.rpm 2.8 MB/s | 164 kB 00:00
2026-03-09T18:24:52.556 INFO:teuthology.orchestra.run.vm04.stdout:(55/140): libpmemobj-1.12.1-1.el9.x86_64.rpm 5.0 MB/s | 160 kB 00:00
2026-03-09T18:24:52.587 INFO:teuthology.orchestra.run.vm04.stdout:(56/140): librabbitmq-0.11.0-7.el9.x86_64.rpm 1.5 MB/s | 45 kB 00:00
2026-03-09T18:24:52.650 INFO:teuthology.orchestra.run.vm04.stdout:(57/140): librdkafka-1.6.1-102.el9.x86_64.rpm 10 MB/s | 662 kB 00:00
2026-03-09T18:24:52.682 INFO:teuthology.orchestra.run.vm04.stdout:(58/140): libstoragemgmt-1.10.1-1.el9.x86_64.rp 7.6 MB/s | 246 kB 00:00
2026-03-09T18:24:52.691 INFO:teuthology.orchestra.run.vm04.stdout:(59/140): flexiblas-netlib-3.0.4-9.el9.x86_64.r 11 MB/s | 3.0 MB 00:00
2026-03-09T18:24:52.714 INFO:teuthology.orchestra.run.vm04.stdout:(60/140): libxslt-1.1.34-12.el9.x86_64.rpm 7.2 MB/s | 233 kB 00:00
2026-03-09T18:24:52.724 INFO:teuthology.orchestra.run.vm04.stdout:(61/140): lttng-ust-2.12.0-6.el9.x86_64.rpm 8.8 MB/s | 292 kB 00:00
2026-03-09T18:24:52.736 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 85/142
2026-03-09T18:24:52.746 INFO:teuthology.orchestra.run.vm04.stdout:(62/140): lua-5.4.4-4.el9.x86_64.rpm 5.8 MB/s | 188 kB 00:00
2026-03-09T18:24:52.755 INFO:teuthology.orchestra.run.vm04.stdout:(63/140): openblas-0.3.29-1.el9.x86_64.rpm 1.3 MB/s | 42 kB 00:00
2026-03-09T18:24:52.768 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-scipy-1.9.3-2.el9.x86_64 86/142
2026-03-09T18:24:52.776 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libxslt-1.1.34-12.el9.x86_64 87/142
2026-03-09T18:24:52.782 INFO:teuthology.orchestra.run.vm09.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 88/142
2026-03-09T18:24:52.844 INFO:teuthology.orchestra.run.vm04.stdout:(64/140): protobuf-3.14.0-17.el9.x86_64.rpm 11 MB/s | 1.0 MB 00:00
2026-03-09T18:24:53.024 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 89/142
2026-03-09T18:24:53.032 INFO:teuthology.orchestra.run.vm04.stdout:(65/140): openblas-openmp-0.3.29-1.el9.x86_64.r 18 MB/s | 5.3 MB 00:00
2026-03-09T18:24:53.034 INFO:teuthology.orchestra.run.vm09.stdout: Upgrading : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 90/142
2026-03-09T18:24:53.068 INFO:teuthology.orchestra.run.vm04.stdout:(66/140): python3-devel-3.9.25-3.el9.x86_64.rpm 6.7 MB/s | 244 kB 00:00
2026-03-09T18:24:53.075 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 90/142
2026-03-09T18:24:53.079 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 91/142
2026-03-09T18:24:53.090 INFO:teuthology.orchestra.run.vm09.stdout: Installing : boost-program-options-1.75.0-13.el9.x86_64 92/142
2026-03-09T18:24:53.108 INFO:teuthology.orchestra.run.vm04.stdout:(67/140): python3-iniconfig-1.1.1-7.el9.noarch. 438 kB/s | 17 kB 00:00
2026-03-09T18:24:53.141 INFO:teuthology.orchestra.run.vm04.stdout:(68/140): python3-jinja2-2.11.3-8.el9.noarch.rp 7.4 MB/s | 249 kB 00:00
2026-03-09T18:24:53.171 INFO:teuthology.orchestra.run.vm04.stdout:(69/140): python3-jmespath-1.0.1-1.el9.noarch.r 1.6 MB/s | 48 kB 00:00
2026-03-09T18:24:53.202 INFO:teuthology.orchestra.run.vm04.stdout:(70/140): python3-libstoragemgmt-1.10.1-1.el9.x 5.6 MB/s | 177 kB 00:00
2026-03-09T18:24:53.234 INFO:teuthology.orchestra.run.vm04.stdout:(71/140): python3-mako-1.1.4-6.el9.noarch.rpm 5.3 MB/s | 172 kB 00:00
2026-03-09T18:24:53.279 INFO:teuthology.orchestra.run.vm04.stdout:(72/140): python3-babel-2.9.1-2.el9.noarch.rpm 14 MB/s | 6.0 MB 00:00
2026-03-09T18:24:53.280 INFO:teuthology.orchestra.run.vm04.stdout:(73/140): python3-markupsafe-1.1.1-12.el9.x86_6 777 kB/s | 35 kB 00:00
2026-03-09T18:24:53.373 INFO:teuthology.orchestra.run.vm09.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 93/142
2026-03-09T18:24:53.376 INFO:teuthology.orchestra.run.vm09.stdout: Installing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 94/142
2026-03-09T18:24:53.398 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 94/142
2026-03-09T18:24:53.401 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 95/142
2026-03-09T18:24:53.667 INFO:teuthology.orchestra.run.vm04.stdout:(74/140): python3-numpy-f2py-1.23.5-2.el9.x86_6 1.1 MB/s | 442 kB 00:00
2026-03-09T18:24:53.698 INFO:teuthology.orchestra.run.vm04.stdout:(75/140): python3-packaging-20.9-5.el9.noarch.r 2.5 MB/s | 77 kB 00:00
2026-03-09T18:24:53.753 INFO:teuthology.orchestra.run.vm04.stdout:(76/140): python3-numpy-1.23.5-2.el9.x86_64.rpm 13 MB/s | 6.1 MB 00:00
2026-03-09T18:24:53.755 INFO:teuthology.orchestra.run.vm04.stdout:(77/140): python3-pluggy-0.13.1-7.el9.noarch.rp 726 kB/s | 41 kB 00:00
2026-03-09T18:24:53.786 INFO:teuthology.orchestra.run.vm04.stdout:(78/140): python3-protobuf-3.14.0-17.el9.noarch 8.1 MB/s | 267 kB 00:00
2026-03-09T18:24:53.818 INFO:teuthology.orchestra.run.vm04.stdout:(79/140): python3-pyasn1-0.4.8-7.el9.noarch.rpm 4.9 MB/s | 157 kB 00:00
2026-03-09T18:24:53.850 INFO:teuthology.orchestra.run.vm04.stdout:(80/140): python3-pyasn1-modules-0.4.8-7.el9.no 8.4 MB/s | 277 kB 00:00
2026-03-09T18:24:53.877 INFO:teuthology.orchestra.run.vm04.stdout:(81/140): python3-py-1.10.0-6.el9.noarch.rpm 3.8 MB/s | 477 kB 00:00
2026-03-09T18:24:53.906 INFO:teuthology.orchestra.run.vm04.stdout:(82/140): python3-requests-oauthlib-1.3.0-12.el 1.8 MB/s | 54 kB 00:00
2026-03-09T18:24:53.979 INFO:teuthology.orchestra.run.vm04.stdout:(83/140): python3-pytest-6.2.2-7.el9.noarch.rpm 3.9 MB/s | 519 kB 00:00
2026-03-09T18:24:54.009 INFO:teuthology.orchestra.run.vm04.stdout:(84/140): python3-toml-0.10.2-6.el9.noarch.rpm 1.4 MB/s | 42 kB 00:00
2026-03-09T18:24:54.041 INFO:teuthology.orchestra.run.vm04.stdout:(85/140): qatlib-25.08.0-2.el9.x86_64.rpm 7.3 MB/s | 240 kB 00:00
2026-03-09T18:24:54.071 INFO:teuthology.orchestra.run.vm04.stdout:(86/140): qatlib-service-25.08.0-2.el9.x86_64.r 1.2 MB/s | 37 kB 00:00
2026-03-09T18:24:54.102 INFO:teuthology.orchestra.run.vm04.stdout:(87/140): qatzip-libs-1.3.1-1.el9.x86_64.rpm 2.2 MB/s | 66 kB 00:00
2026-03-09T18:24:54.135 INFO:teuthology.orchestra.run.vm04.stdout:(88/140): socat-1.7.4.1-8.el9.x86_64.rpm 8.9 MB/s | 303 kB 00:00
2026-03-09T18:24:54.167 INFO:teuthology.orchestra.run.vm04.stdout:(89/140): xmlstarlet-1.6.1-20.el9.x86_64.rpm 2.0 MB/s | 64 kB 00:00
2026-03-09T18:24:54.221 INFO:teuthology.orchestra.run.vm04.stdout:(90/140): lua-devel-5.4.4-4.el9.x86_64.rpm 416 kB/s | 22 kB 00:00
2026-03-09T18:24:54.313 INFO:teuthology.orchestra.run.vm04.stdout:(91/140): protobuf-compiler-3.14.0-17.el9.x86_6 9.2 MB/s | 862 kB 00:00
2026-03-09T18:24:54.329 INFO:teuthology.orchestra.run.vm04.stdout:(92/140): abseil-cpp-20211102.0-4.el9.x86_64.rp 35 MB/s | 551 kB 00:00
2026-03-09T18:24:54.337 INFO:teuthology.orchestra.run.vm04.stdout:(93/140): gperftools-libs-2.9.1-3.el9.x86_64.rp 39 MB/s | 308 kB 00:00
2026-03-09T18:24:54.340 INFO:teuthology.orchestra.run.vm04.stdout:(94/140): grpc-data-1.46.7-10.el9.noarch.rpm 8.4 MB/s | 19 kB 00:00
2026-03-09T18:24:54.398 INFO:teuthology.orchestra.run.vm04.stdout:(95/140): libarrow-9.0.0-15.el9.x86_64.rpm 76 MB/s | 4.4 MB 00:00
2026-03-09T18:24:54.401 INFO:teuthology.orchestra.run.vm04.stdout:(96/140): libarrow-doc-9.0.0-15.el9.noarch.rpm 11 MB/s | 25 kB 00:00
2026-03-09T18:24:54.403 INFO:teuthology.orchestra.run.vm04.stdout:(97/140): liboath-2.6.12-1.el9.x86_64.rpm 20 MB/s | 49 kB 00:00
2026-03-09T18:24:54.406 INFO:teuthology.orchestra.run.vm04.stdout:(98/140): libunwind-1.6.2-1.el9.x86_64.rpm 25 MB/s | 67 kB 00:00
2026-03-09T18:24:54.410 INFO:teuthology.orchestra.run.vm04.stdout:(99/140): luarocks-3.9.2-5.el9.noarch.rpm 44 MB/s | 151 kB 00:00
2026-03-09T18:24:54.422 INFO:teuthology.orchestra.run.vm04.stdout:(100/140): parquet-libs-9.0.0-15.el9.x86_64.rpm 71 MB/s | 838 kB 00:00
2026-03-09T18:24:54.430 INFO:teuthology.orchestra.run.vm04.stdout:(101/140): python3-asyncssh-2.13.2-5.el9.noarch 67 MB/s | 548 kB 00:00
2026-03-09T18:24:54.432 INFO:teuthology.orchestra.run.vm04.stdout:(102/140): python3-autocommand-2.2.2-8.el9.noar 13 MB/s | 29 kB 00:00
2026-03-09T18:24:54.435 INFO:teuthology.orchestra.run.vm04.stdout:(103/140): python3-backports-tarfile-1.2.0-1.el 23 MB/s | 60 kB 00:00
2026-03-09T18:24:54.438 INFO:teuthology.orchestra.run.vm04.stdout:(104/140): python3-bcrypt-3.2.2-1.el9.x86_64.rp 20 MB/s | 43 kB 00:00
2026-03-09T18:24:54.440 INFO:teuthology.orchestra.run.vm04.stdout:(105/140): python3-cachetools-4.2.4-1.el9.noarc 13 MB/s | 32 kB 00:00
2026-03-09T18:24:54.442 INFO:teuthology.orchestra.run.vm04.stdout:(106/140): python3-certifi-2023.05.07-4.el9.noa 6.6 MB/s | 14 kB 00:00
2026-03-09T18:24:54.446 INFO:teuthology.orchestra.run.vm04.stdout:(107/140): python3-cheroot-10.0.1-4.el9.noarch. 47 MB/s | 173 kB 00:00
2026-03-09T18:24:54.453 INFO:teuthology.orchestra.run.vm04.stdout:(108/140): python3-cherrypy-18.6.1-2.el9.noarch 54 MB/s | 358 kB 00:00
2026-03-09T18:24:54.458 INFO:teuthology.orchestra.run.vm04.stdout:(109/140): python3-google-auth-2.45.0-1.el9.noa 53 MB/s | 254 kB 00:00
2026-03-09T18:24:54.486 INFO:teuthology.orchestra.run.vm04.stdout:(110/140): python3-grpcio-1.46.7-10.el9.x86_64. 76 MB/s | 2.0 MB 00:00
2026-03-09T18:24:54.490 INFO:teuthology.orchestra.run.vm04.stdout:(111/140): python3-grpcio-tools-1.46.7-10.el9.x 37 MB/s | 144 kB 00:00
2026-03-09T18:24:54.492 INFO:teuthology.orchestra.run.vm04.stdout:(112/140): python3-jaraco-8.2.1-3.el9.noarch.rp 4.0 MB/s | 11 kB 00:00
2026-03-09T18:24:54.495 INFO:teuthology.orchestra.run.vm04.stdout:(113/140): python3-jaraco-classes-3.2.1-5.el9.n 8.6 MB/s | 18 kB 00:00
2026-03-09T18:24:54.497 INFO:teuthology.orchestra.run.vm04.stdout:(114/140): python3-jaraco-collections-3.0.0-8.e 11 MB/s | 23 kB 00:00
2026-03-09T18:24:54.499 INFO:teuthology.orchestra.run.vm04.stdout:(115/140): python3-jaraco-context-6.0.1-3.el9.n 9.1 MB/s | 20 kB 00:00
2026-03-09T18:24:54.501 INFO:teuthology.orchestra.run.vm04.stdout:(116/140): python3-jaraco-functools-3.5.0-2.el9 10 MB/s | 19 kB 00:00
2026-03-09T18:24:54.504 INFO:teuthology.orchestra.run.vm04.stdout:(117/140): python3-jaraco-text-4.0.0-2.el9.noar 11 MB/s | 26 kB 00:00
2026-03-09T18:24:54.518 INFO:teuthology.orchestra.run.vm04.stdout:(118/140): python3-kubernetes-26.1.0-3.el9.noar 73 MB/s | 1.0 MB 00:00
2026-03-09T18:24:54.521 INFO:teuthology.orchestra.run.vm04.stdout:(119/140): python3-logutils-0.3.5-21.el9.noarch 18 MB/s | 46 kB 00:00
2026-03-09T18:24:54.524 INFO:teuthology.orchestra.run.vm04.stdout:(120/140): python3-more-itertools-8.12.0-2.el9. 29 MB/s | 79 kB 00:00
2026-03-09T18:24:54.527 INFO:teuthology.orchestra.run.vm04.stdout:(121/140): python3-natsort-7.1.1-5.el9.noarch.r 19 MB/s | 58 kB 00:00
2026-03-09T18:24:54.533 INFO:teuthology.orchestra.run.vm04.stdout:(122/140): python3-pecan-1.4.2-3.el9.noarch.rpm 49 MB/s | 272 kB 00:00
2026-03-09T18:24:54.543 INFO:teuthology.orchestra.run.vm04.stdout:(123/140): python3-portend-3.1.0-2.el9.noarch.r 1.7 MB/s | 16 kB 00:00
2026-03-09T18:24:54.548 INFO:teuthology.orchestra.run.vm04.stdout:(124/140): python3-pyOpenSSL-21.0.0-1.el9.noarc 20 MB/s | 90 kB 00:00
2026-03-09T18:24:54.553 INFO:teuthology.orchestra.run.vm04.stdout:(125/140): python3-repoze-lru-0.7-16.el9.noarch 6.0 MB/s | 31 kB 00:00
2026-03-09T18:24:54.558 INFO:teuthology.orchestra.run.vm04.stdout:(126/140): python3-routes-2.5.1-5.el9.noarch.rp 40 MB/s | 188 kB 00:00
2026-03-09T18:24:54.561 INFO:teuthology.orchestra.run.vm04.stdout:(127/140): python3-rsa-4.9-2.el9.noarch.rpm 23 MB/s | 59 kB 00:00
2026-03-09T18:24:54.563 INFO:teuthology.orchestra.run.vm04.stdout:(128/140): python3-tempora-5.0.0-2.el9.noarch.r 16 MB/s | 36 kB 00:00
2026-03-09T18:24:54.566 INFO:teuthology.orchestra.run.vm04.stdout:(129/140): python3-typing-extensions-4.15.0-1.e 32 MB/s | 86 kB 00:00
2026-03-09T18:24:54.571 INFO:teuthology.orchestra.run.vm04.stdout:(130/140): python3-webob-1.8.8-2.el9.noarch.rpm 44 MB/s | 230 kB 00:00
2026-03-09T18:24:54.575 INFO:teuthology.orchestra.run.vm04.stdout:(131/140): python3-websocket-client-1.2.3-2.el9 24 MB/s | 90 kB 00:00
2026-03-09T18:24:54.584 INFO:teuthology.orchestra.run.vm04.stdout:(132/140): python3-werkzeug-2.0.3-3.el9.1.noarc 47 MB/s | 427 kB 00:00
2026-03-09T18:24:54.587 INFO:teuthology.orchestra.run.vm04.stdout:(133/140): python3-xmltodict-0.12.0-15.el9.noar 9.4 MB/s | 22 kB 00:00
2026-03-09T18:24:54.589 INFO:teuthology.orchestra.run.vm04.stdout:(134/140): python3-zc-lockfile-2.0-10.el9.noarc 9.8 MB/s | 20 kB 00:00
2026-03-09T18:24:54.593 INFO:teuthology.orchestra.run.vm04.stdout:(135/140): re2-20211101-20.el9.x86_64.rpm 49 MB/s | 191 kB 00:00
2026-03-09T18:24:54.599 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 96/142
2026-03-09T18:24:54.604 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 96/142
2026-03-09T18:24:54.617 INFO:teuthology.orchestra.run.vm04.stdout:(136/140): thrift-0.15.0-4.el9.x86_64.rpm 68 MB/s | 1.6 MB 00:00
2026-03-09T18:24:54.629 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 96/142
2026-03-09T18:24:54.648 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-ply-3.11-14.el9.noarch 97/142
2026-03-09T18:24:54.671 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 98/142
2026-03-09T18:24:54.763 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 99/142
2026-03-09T18:24:54.779 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-cryptography-36.0.1-5.el9.x86_64 100/142
2026-03-09T18:24:54.809 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pyOpenSSL-21.0.0-1.el9.noarch 101/142
2026-03-09T18:24:54.867 INFO:teuthology.orchestra.run.vm04.stdout:(137/140): ceph-test-19.2.3-678.ge911bdeb.el9.x 5.0 MB/s | 50 MB 00:09
2026-03-09T18:24:54.906 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-cheroot-10.0.1-4.el9.noarch 102/142
2026-03-09T18:24:54.968 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-cherrypy-18.6.1-2.el9.noarch 103/142
2026-03-09T18:24:54.978 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 104/142
2026-03-09T18:24:54.984 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 105/142
2026-03-09T18:24:54.990 INFO:teuthology.orchestra.run.vm09.stdout: Installing : pciutils-3.7.0-7.el9.x86_64 106/142
2026-03-09T18:24:54.995 INFO:teuthology.orchestra.run.vm09.stdout: Installing : qatlib-25.08.0-2.el9.x86_64 107/142
2026-03-09T18:24:54.998 INFO:teuthology.orchestra.run.vm09.stdout: Installing : qatlib-service-25.08.0-2.el9.x86_64 108/142
2026-03-09T18:24:55.016 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 108/142
2026-03-09T18:24:55.337 INFO:teuthology.orchestra.run.vm04.stdout:(138/140): python3-scipy-1.9.3-2.el9.x86_64.rpm 13 MB/s | 19 MB 00:01
2026-03-09T18:24:55.340 INFO:teuthology.orchestra.run.vm09.stdout: Installing : qatzip-libs-1.3.1-1.el9.x86_64 109/142
2026-03-09T18:24:55.348 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 110/142
2026-03-09T18:24:55.392 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 110/142
2026-03-09T18:24:55.392 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target.
2026-03-09T18:24:55.392 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service.
2026-03-09T18:24:55.392 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:24:55.397 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 111/142
2026-03-09T18:24:55.933 INFO:teuthology.orchestra.run.vm04.stdout:(139/140): librados2-19.2.3-678.ge911bdeb.el9.x 2.6 MB/s | 3.4 MB 00:01
2026-03-09T18:24:56.802 INFO:teuthology.orchestra.run.vm04.stdout:(140/140): librbd1-19.2.3-678.ge911bdeb.el9.x86 1.6 MB/s | 3.2 MB 00:01
2026-03-09T18:24:56.805 INFO:teuthology.orchestra.run.vm04.stdout:--------------------------------------------------------------------------------
2026-03-09T18:24:56.806 INFO:teuthology.orchestra.run.vm04.stdout:Total 11 MB/s | 211 MB 00:18
2026-03-09T18:24:57.502 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check
2026-03-09T18:24:57.552 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded.
2026-03-09T18:24:57.553 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test
2026-03-09T18:24:58.400 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded.
2026-03-09T18:24:58.400 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction
2026-03-09T18:24:59.319 INFO:teuthology.orchestra.run.vm04.stdout: Preparing : 1/1
2026-03-09T18:24:59.334 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 1/142
2026-03-09T18:24:59.348 INFO:teuthology.orchestra.run.vm04.stdout: Installing : thrift-0.15.0-4.el9.x86_64 2/142
2026-03-09T18:24:59.528 INFO:teuthology.orchestra.run.vm04.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 3/142
2026-03-09T18:24:59.530 INFO:teuthology.orchestra.run.vm04.stdout: Upgrading : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/142
2026-03-09T18:24:59.595 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/142
2026-03-09T18:24:59.597 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/142
2026-03-09T18:24:59.628 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/142
2026-03-09T18:24:59.638 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 6/142
2026-03-09T18:24:59.642 INFO:teuthology.orchestra.run.vm04.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 7/142
2026-03-09T18:24:59.645 INFO:teuthology.orchestra.run.vm04.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 8/142
2026-03-09T18:24:59.657 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 9/142
2026-03-09T18:24:59.665 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-packaging-20.9-5.el9.noarch 10/142
2026-03-09T18:24:59.675 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libnbd-1.20.3-4.el9.x86_64 11/142
2026-03-09T18:24:59.677 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 12/142
2026-03-09T18:24:59.715 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 12/142
2026-03-09T18:24:59.716 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 13/142
2026-03-09T18:24:59.734 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 13/142
2026-03-09T18:24:59.770 INFO:teuthology.orchestra.run.vm04.stdout: Installing : re2-1:20211101-20.el9.x86_64 14/142
2026-03-09T18:24:59.812 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 15/142
2026-03-09T18:24:59.818 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-werkzeug-2.0.3-3.el9.1.noarch 16/142
2026-03-09T18:24:59.826 INFO:teuthology.orchestra.run.vm04.stdout: Installing : liboath-2.6.12-1.el9.x86_64 17/142
2026-03-09T18:24:59.832 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-toml-0.10.2-6.el9.noarch 18/142
2026-03-09T18:24:59.859 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 19/142
2026-03-09T18:24:59.874 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-pyasn1-0.4.8-7.el9.noarch 20/142
2026-03-09T18:24:59.889 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-markupsafe-1.1.1-12.el9.x86_64 21/142
2026-03-09T18:24:59.896 INFO:teuthology.orchestra.run.vm04.stdout: Installing : protobuf-3.14.0-17.el9.x86_64 22/142
2026-03-09T18:24:59.901 INFO:teuthology.orchestra.run.vm04.stdout: Installing : lua-5.4.4-4.el9.x86_64 23/142
2026-03-09T18:24:59.908 INFO:teuthology.orchestra.run.vm04.stdout: Installing : flexiblas-3.0.4-9.el9.x86_64 24/142
2026-03-09T18:24:59.939 INFO:teuthology.orchestra.run.vm04.stdout: Installing : unzip-6.0-59.el9.x86_64 25/142
2026-03-09T18:24:59.958 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-urllib3-1.26.5-7.el9.noarch 26/142
2026-03-09T18:24:59.964 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-requests-2.25.1-10.el9.noarch 27/142
2026-03-09T18:24:59.972 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libquadmath-11.5.0-14.el9.x86_64 28/142
2026-03-09T18:24:59.975 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libgfortran-11.5.0-14.el9.x86_64 29/142
2026-03-09T18:25:00.008 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 30/142
2026-03-09T18:25:00.016 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 31/142
2026-03-09T18:25:00.028 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 32/142
2026-03-09T18:25:00.043 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 33/142
2026-03-09T18:25:00.053 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 34/142
2026-03-09T18:25:00.084 INFO:teuthology.orchestra.run.vm04.stdout: Installing : zip-3.0-35.el9.x86_64 35/142
2026-03-09T18:25:00.092 INFO:teuthology.orchestra.run.vm04.stdout: Installing : luarocks-3.9.2-5.el9.noarch 36/142
2026-03-09T18:25:00.101 INFO:teuthology.orchestra.run.vm04.stdout: Installing : lua-devel-5.4.4-4.el9.x86_64 37/142
2026-03-09T18:25:00.133 INFO:teuthology.orchestra.run.vm04.stdout: Installing : protobuf-compiler-3.14.0-17.el9.x86_64 38/142
2026-03-09T18:25:00.201 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-mako-1.1.4-6.el9.noarch 39/142
2026-03-09T18:25:00.219 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9.noarch 40/142
2026-03-09T18:25:00.230 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-rsa-4.9-2.el9.noarch 41/142
2026-03-09T18:25:00.236 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 42/142
2026-03-09T18:25:00.254 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-portend-3.1.0-2.el9.noarch 43/142
2026-03-09T18:25:00.281 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 44/142
2026-03-09T18:25:00.291 INFO:teuthology.orchestra.run.vm04.stdout: Installing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 45/142
2026-03-09T18:25:00.297 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 46/142
2026-03-09T18:25:00.316 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 47/142
2026-03-09T18:25:00.348 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 48/142
2026-03-09T18:25:00.355 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-webob-1.8.8-2.el9.noarch 49/142
2026-03-09T18:25:00.364 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 50/142
2026-03-09T18:25:00.379 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 51/142
2026-03-09T18:25:00.393 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 52/142
2026-03-09T18:25:00.407 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 53/142
2026-03-09T18:25:00.483 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-logutils-0.3.5-21.el9.noarch 54/142
2026-03-09T18:25:00.493 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-pecan-1.4.2-3.el9.noarch 55/142
2026-03-09T18:25:00.505 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-certifi-2023.05.07-4.el9.noarch 56/142
2026-03-09T18:25:00.561 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 57/142
2026-03-09T18:25:00.999 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-google-auth-1:2.45.0-1.el9.noarch 58/142
2026-03-09T18:25:01.242 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 59/142
2026-03-09T18:25:01.304 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 60/142
2026-03-09T18:25:01.441 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-jaraco-context-6.0.1-3.el9.noarch 61/142
2026-03-09T18:25:01.507 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 62/142
2026-03-09T18:25:01.637 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 63/142
2026-03-09T18:25:01.705 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 64/142
2026-03-09T18:25:01.812 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 65/142
2026-03-09T18:25:01.824 INFO:teuthology.orchestra.run.vm04.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 66/142
2026-03-09T18:25:01.826 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 67/142
2026-03-09T18:25:01.864 INFO:teuthology.orchestra.run.vm04.stdout: Installing : grpc-data-1.46.7-10.el9.noarch 68/142
2026-03-09T18:25:01.926 INFO:teuthology.orchestra.run.vm04.stdout: Installing : abseil-cpp-20211102.0-4.el9.x86_64 69/142
2026-03-09T18:25:01.949 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-grpcio-1.46.7-10.el9.x86_64 70/142
2026-03-09T18:25:02.009 INFO:teuthology.orchestra.run.vm04.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 71/142
2026-03-09T18:25:02.049 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-py-1.10.0-6.el9.noarch 72/142
2026-03-09T18:25:02.065 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-protobuf-3.14.0-17.el9.noarch 73/142
2026-03-09T18:25:02.078 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-grpcio-tools-1.46.7-10.el9.x86_64 74/142
2026-03-09T18:25:02.088 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-pluggy-0.13.1-7.el9.noarch 75/142
2026-03-09T18:25:02.134 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-iniconfig-1.1.1-7.el9.noarch 76/142
2026-03-09T18:25:02.342 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 111/142
2026-03-09T18:25:02.342 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /sys
2026-03-09T18:25:02.342 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /proc
2026-03-09T18:25:02.342 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /mnt
2026-03-09T18:25:02.342 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /var/tmp
2026-03-09T18:25:02.342 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /home
2026-03-09T18:25:02.342 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /root
2026-03-09T18:25:02.342 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /tmp
2026-03-09T18:25:02.342 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:25:02.424 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-devel-3.9.25-3.el9.x86_64 77/142
2026-03-09T18:25:02.469 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 78/142
2026-03-09T18:25:02.497 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 112/142
2026-03-09T18:25:02.501 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-jinja2-2.11.3-8.el9.noarch 79/142
2026-03-09T18:25:02.532 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 112/142
2026-03-09T18:25:02.532 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T18:25:02.532 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-09T18:25:02.532 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-09T18:25:02.532 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-09T18:25:02.532 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:25:02.577 INFO:teuthology.orchestra.run.vm04.stdout: Installing : openblas-0.3.29-1.el9.x86_64 80/142
2026-03-09T18:25:02.582 INFO:teuthology.orchestra.run.vm04.stdout: Installing : openblas-openmp-0.3.29-1.el9.x86_64 81/142
2026-03-09T18:25:02.609 INFO:teuthology.orchestra.run.vm04.stdout: Installing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 82/142
2026-03-09T18:25:02.798 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 113/142
2026-03-09T18:25:02.827 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 113/142
2026-03-09T18:25:02.827 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T18:25:02.827 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-09T18:25:02.827 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-09T18:25:02.827 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-09T18:25:02.827 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:25:02.838 INFO:teuthology.orchestra.run.vm09.stdout: Installing : mailcap-2.1.49-5.el9.noarch 114/142
2026-03-09T18:25:02.840 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 115/142
2026-03-09T18:25:02.861 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 116/142
2026-03-09T18:25:02.861 INFO:teuthology.orchestra.run.vm09.stdout:Creating group 'qat' with GID 994.
2026-03-09T18:25:02.861 INFO:teuthology.orchestra.run.vm09.stdout:Creating group 'libstoragemgmt' with GID 993.
2026-03-09T18:25:02.861 INFO:teuthology.orchestra.run.vm09.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 993 and GID 993.
2026-03-09T18:25:02.861 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:25:02.873 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 116/142
2026-03-09T18:25:02.902 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 116/142
2026-03-09T18:25:02.902 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service.
2026-03-09T18:25:02.902 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:25:02.952 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 117/142
2026-03-09T18:25:03.041 INFO:teuthology.orchestra.run.vm04.stdout: Installing : flexiblas-netlib-3.0.4-9.el9.x86_64 83/142
2026-03-09T18:25:03.117 INFO:teuthology.orchestra.run.vm09.stdout: Installing : cryptsetup-2.8.1-3.el9.x86_64 118/142
2026-03-09T18:25:03.122 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 119/142
2026-03-09T18:25:03.137 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 119/142
2026-03-09T18:25:03.137 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T18:25:03.137 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service".
2026-03-09T18:25:03.137 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:25:03.151 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-numpy-1:1.23.5-2.el9.x86_64 84/142
2026-03-09T18:25:03.978 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 120/142
2026-03-09T18:25:04.007 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 120/142
2026-03-09T18:25:04.007 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T18:25:04.007 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-09T18:25:04.007 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-09T18:25:04.007 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-09T18:25:04.007 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:25:04.047 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 85/142
2026-03-09T18:25:04.076 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 121/142
2026-03-09T18:25:04.079 INFO:teuthology.orchestra.run.vm09.stdout: Installing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 121/142
2026-03-09T18:25:04.082 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-scipy-1.9.3-2.el9.x86_64 86/142
2026-03-09T18:25:04.086 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 122/142
2026-03-09T18:25:04.091 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libxslt-1.1.34-12.el9.x86_64 87/142
2026-03-09T18:25:04.098 INFO:teuthology.orchestra.run.vm04.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 88/142
2026-03-09T18:25:04.114 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 123/142
2026-03-09T18:25:04.117 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 124/142
2026-03-09T18:25:04.273 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 89/142
2026-03-09T18:25:04.277 INFO:teuthology.orchestra.run.vm04.stdout: Upgrading : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 90/142
2026-03-09T18:25:04.313 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 90/142
2026-03-09T18:25:04.317 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 91/142
2026-03-09T18:25:04.326 INFO:teuthology.orchestra.run.vm04.stdout: Installing : boost-program-options-1.75.0-13.el9.x86_64 92/142
2026-03-09T18:25:04.614 INFO:teuthology.orchestra.run.vm04.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 93/142
2026-03-09T18:25:04.616 INFO:teuthology.orchestra.run.vm04.stdout: Installing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 94/142
2026-03-09T18:25:04.639 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 94/142
2026-03-09T18:25:04.640 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 95/142
2026-03-09T18:25:04.746 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 124/142
2026-03-09T18:25:04.752 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 125/142
2026-03-09T18:25:05.310 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 125/142
2026-03-09T18:25:05.313 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 126/142
2026-03-09T18:25:05.383 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 126/142
2026-03-09T18:25:05.443 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 127/142
2026-03-09T18:25:05.446 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 128/142
2026-03-09T18:25:05.470 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 128/142
2026-03-09T18:25:05.470 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T18:25:05.470 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-03-09T18:25:05.470 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-09T18:25:05.470 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-09T18:25:05.470 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:25:05.485 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 129/142
2026-03-09T18:25:05.500 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 129/142
2026-03-09T18:25:05.842 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 96/142
2026-03-09T18:25:05.938 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 96/142
2026-03-09T18:25:05.962 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 96/142
2026-03-09T18:25:05.980 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-ply-3.11-14.el9.noarch 97/142
2026-03-09T18:25:06.005 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 98/142
2026-03-09T18:25:06.035 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 130/142
2026-03-09T18:25:06.038 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 131/142
2026-03-09T18:25:06.064 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 131/142
2026-03-09T18:25:06.064 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T18:25:06.064 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-09T18:25:06.065 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-09T18:25:06.065 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-09T18:25:06.065 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:25:06.077 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 132/142
2026-03-09T18:25:06.102 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 132/142
2026-03-09T18:25:06.102 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T18:25:06.102 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-09T18:25:06.102 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:25:06.102 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 99/142
2026-03-09T18:25:06.119 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-cryptography-36.0.1-5.el9.x86_64 100/142
2026-03-09T18:25:06.152 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-pyOpenSSL-21.0.0-1.el9.noarch 101/142
2026-03-09T18:25:06.196 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-cheroot-10.0.1-4.el9.noarch 102/142
2026-03-09T18:25:06.261 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-cherrypy-18.6.1-2.el9.noarch 103/142
2026-03-09T18:25:06.269 INFO:teuthology.orchestra.run.vm09.stdout: Installing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 133/142
2026-03-09T18:25:06.274 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 104/142
2026-03-09T18:25:06.283 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 105/142
2026-03-09T18:25:06.290 INFO:teuthology.orchestra.run.vm04.stdout: Installing : pciutils-3.7.0-7.el9.x86_64 106/142
2026-03-09T18:25:06.294 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 133/142
2026-03-09T18:25:06.294 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T18:25:06.294 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-09T18:25:06.294 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-09T18:25:06.294 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-09T18:25:06.294 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:25:06.295 INFO:teuthology.orchestra.run.vm04.stdout: Installing : qatlib-25.08.0-2.el9.x86_64 107/142
2026-03-09T18:25:06.297 INFO:teuthology.orchestra.run.vm04.stdout: Installing : qatlib-service-25.08.0-2.el9.x86_64 108/142
2026-03-09T18:25:06.313 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 108/142
2026-03-09T18:25:06.652 INFO:teuthology.orchestra.run.vm04.stdout: Installing : qatzip-libs-1.3.1-1.el9.x86_64 109/142
2026-03-09T18:25:06.659 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 110/142
2026-03-09T18:25:06.715 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 110/142
2026-03-09T18:25:06.715 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target.
2026-03-09T18:25:06.715 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service.
2026-03-09T18:25:06.715 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T18:25:06.720 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 111/142
2026-03-09T18:25:09.082 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 134/142
2026-03-09T18:25:09.094 INFO:teuthology.orchestra.run.vm09.stdout: Installing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 135/142
2026-03-09T18:25:09.159 INFO:teuthology.orchestra.run.vm09.stdout: Installing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 136/142
2026-03-09T18:25:09.169 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pytest-6.2.2-7.el9.noarch 137/142
2026-03-09T18:25:09.231 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 138/142
2026-03-09T18:25:09.242 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 139/142
2026-03-09T18:25:09.251 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jmespath-1.0.1-1.el9.noarch 140/142
2026-03-09T18:25:09.251 INFO:teuthology.orchestra.run.vm09.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 141/142
2026-03-09T18:25:09.270 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 141/142
2026-03-09T18:25:09.270 INFO:teuthology.orchestra.run.vm09.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 142/142
2026-03-09T18:25:10.747 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 142/142
2026-03-09T18:25:10.747 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/142
2026-03-09T18:25:10.747 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/142
2026-03-09T18:25:10.747 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/142
2026-03-09T18:25:10.747 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 4/142
2026-03-09T18:25:10.747 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/142
2026-03-09T18:25:10.747 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 6/142
2026-03-09T18:25:10.747 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 7/142
2026-03-09T18:25:10.747 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/142
2026-03-09T18:25:10.747 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 9/142
2026-03-09T18:25:10.747 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 10/142
2026-03-09T18:25:10.747 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 11/142
2026-03-09T18:25:10.747 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 12/142
2026-03-09T18:25:10.748 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 13/142
2026-03-09T18:25:10.748 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 14/142
2026-03-09T18:25:10.748 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 15/142
2026-03-09T18:25:10.748 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 16/142
2026-03-09T18:25:10.748 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 17/142
2026-03-09T18:25:10.748 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 18/142
2026-03-09T18:25:10.748 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 19/142
2026-03-09T18:25:10.748 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 20/142
2026-03-09T18:25:10.748 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 21/142
2026-03-09T18:25:10.748 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 22/142
2026-03-09T18:25:10.748 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 23/142
2026-03-09T18:25:10.748 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 24/142
2026-03-09T18:25:10.748 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 25/142
2026-03-09T18:25:10.749 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 26/142
2026-03-09T18:25:10.749 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 27/142
2026-03-09T18:25:10.749 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 28/142
2026-03-09T18:25:10.749 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 29/142
2026-03-09T18:25:10.749 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 30/142
2026-03-09T18:25:10.749 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 31/142
2026-03-09T18:25:10.749 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 32/142
2026-03-09T18:25:10.749 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 33/142
2026-03-09T18:25:10.749 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 34/142
2026-03-09T18:25:10.749 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 35/142
2026-03-09T18:25:10.749 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 36/142
2026-03-09T18:25:10.749 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 37/142
2026-03-09T18:25:10.749 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 38/142
2026-03-09T18:25:10.749 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 39/142
2026-03-09T18:25:10.749 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 40/142
2026-03-09T18:25:10.749 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 41/142
2026-03-09T18:25:10.749 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 42/142
2026-03-09T18:25:10.749 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 43/142
2026-03-09T18:25:10.751 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/142
2026-03-09T18:25:10.751 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 45/142
2026-03-09T18:25:10.751 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-ply-3.11-14.el9.noarch 46/142
2026-03-09T18:25:10.751 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 47/142
2026-03-09T18:25:10.751 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 48/142
2026-03-09T18:25:10.751 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 49/142
2026-03-09T18:25:10.751 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : unzip-6.0-59.el9.x86_64 50/142
2026-03-09T18:25:10.751 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : zip-3.0-35.el9.x86_64 51/142
2026-03-09T18:25:10.751 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 52/142
2026-03-09T18:25:10.751 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 53/142
2026-03-09T18:25:10.751 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 54/142
2026-03-09T18:25:10.751 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 55/142
2026-03-09T18:25:10.751 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 56/142
2026-03-09T18:25:10.751 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 57/142
2026-03-09T18:25:10.751 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 58/142
2026-03-09T18:25:10.751 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 59/142
2026-03-09T18:25:10.751 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 60/142
2026-03-09T18:25:10.751 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 61/142
2026-03-09T18:25:10.751 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 62/142
2026-03-09T18:25:10.751 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : lua-5.4.4-4.el9.x86_64 63/142
2026-03-09T18:25:10.751 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 64/142
2026-03-09T18:25:10.751 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 65/142
2026-03-09T18:25:10.751 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 66/142
2026-03-09T18:25:10.751 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 67/142
2026-03-09T18:25:10.751 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 68/142
2026-03-09T18:25:10.751 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-iniconfig-1.1.1-7.el9.noarch 69/142
2026-03-09T18:25:10.752 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 70/142
2026-03-09T18:25:10.752 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jmespath-1.0.1-1.el9.noarch 71/142
2026-03-09T18:25:10.752 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 72/142
2026-03-09T18:25:10.752 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 73/142
2026-03-09T18:25:10.752 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 74/142
2026-03-09T18:25:10.752 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 75/142
2026-03-09T18:25:10.752 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 76/142
2026-03-09T18:25:10.752 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 77/142
2026-03-09T18:25:10.752 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pluggy-0.13.1-7.el9.noarch 78/142
2026-03-09T18:25:10.752 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 79/142
2026-03-09T18:25:10.752 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-py-1.10.0-6.el9.noarch 80/142
2026-03-09T18:25:10.752 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 81/142
2026-03-09T18:25:10.752 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 82/142
2026-03-09T18:25:10.752 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pytest-6.2.2-7.el9.noarch 83/142
2026-03-09T18:25:10.752 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 84/142
2026-03-09T18:25:10.752 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 85/142
2026-03-09T18:25:10.752 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 86/142
2026-03-09T18:25:10.752 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 87/142
2026-03-09T18:25:10.752 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 88/142
2026-03-09T18:25:10.752 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 89/142
2026-03-09T18:25:10.752 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 90/142
2026-03-09T18:25:10.752 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 91/142
2026-03-09T18:25:10.752 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 92/142
2026-03-09T18:25:10.754 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 93/142
2026-03-09T18:25:10.754 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 94/142
2026-03-09T18:25:10.754 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 95/142
2026-03-09T18:25:10.754 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 96/142
2026-03-09T18:25:10.754 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 97/142
2026-03-09T18:25:10.754 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 98/142
2026-03-09T18:25:10.754 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 99/142
2026-03-09T18:25:10.754 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 100/142
2026-03-09T18:25:10.754 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 101/142
2026-03-09T18:25:10.754 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 102/142
2026-03-09T18:25:10.754 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 103/142
2026-03-09T18:25:10.754 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 104/142
2026-03-09T18:25:10.754 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 105/142
2026-03-09T18:25:10.754 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 106/142
2026-03-09T18:25:10.754 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 107/142
2026-03-09T18:25:10.754 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 108/142
2026-03-09T18:25:10.754 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 109/142
2026-03-09T18:25:10.754 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 110/142
2026-03-09T18:25:10.754 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 111/142
2026-03-09T18:25:10.754 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 112/142
2026-03-09T18:25:10.754 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 113/142
2026-03-09T18:25:10.754 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 114/142
2026-03-09T18:25:10.754 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 115/142
2026-03-09T18:25:10.754 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 116/142
2026-03-09T18:25:10.754 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 117/142
2026-03-09T18:25:10.754 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 118/142
2026-03-09T18:25:10.754 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 119/142
2026-03-09T18:25:10.755 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 120/142
2026-03-09T18:25:10.755 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 121/142
2026-03-09T18:25:10.755 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 122/142
2026-03-09T18:25:10.755 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 123/142
2026-03-09T18:25:10.755 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 124/142
2026-03-09T18:25:10.755 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 125/142
2026-03-09T18:25:10.755 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 126/142
2026-03-09T18:25:10.755 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 127/142
2026-03-09T18:25:10.755 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 128/142
2026-03-09T18:25:10.755 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 129/142
2026-03-09T18:25:10.755 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 130/142
2026-03-09T18:25:10.755 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 131/142
2026-03-09T18:25:10.755 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 132/142
2026-03-09T18:25:10.755 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 133/142
2026-03-09T18:25:10.755 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 134/142
2026-03-09T18:25:10.755 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 135/142
2026-03-09T18:25:10.755 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 136/142
2026-03-09T18:25:10.755 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : re2-1:20211101-20.el9.x86_64 137/142
2026-03-09T18:25:10.755 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 138/142
2026-03-09T18:25:10.755 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 139/142
2026-03-09T18:25:10.755 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 140/142
2026-03-09T18:25:10.755 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 141/142
2026-03-09T18:25:10.877 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 142/142
2026-03-09T18:25:10.877 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:25:10.877 INFO:teuthology.orchestra.run.vm09.stdout:Upgraded:
2026-03-09T18:25:10.877 INFO:teuthology.orchestra.run.vm09.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:25:10.877 INFO:teuthology.orchestra.run.vm09.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:25:10.877 INFO:teuthology.orchestra.run.vm09.stdout:Installed:
2026-03-09T18:25:10.877 INFO:teuthology.orchestra.run.vm09.stdout: abseil-cpp-20211102.0-4.el9.x86_64
2026-03-09T18:25:10.877 INFO:teuthology.orchestra.run.vm09.stdout: boost-program-options-1.75.0-13.el9.x86_64
2026-03-09T18:25:10.877 INFO:teuthology.orchestra.run.vm09.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:25:10.877 INFO:teuthology.orchestra.run.vm09.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:25:10.877 INFO:teuthology.orchestra.run.vm09.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:25:10.877 INFO:teuthology.orchestra.run.vm09.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:25:10.877 INFO:teuthology.orchestra.run.vm09.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T18:25:10.878 INFO:teuthology.orchestra.run.vm09.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:25:10.878 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:25:10.878 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:25:10.878 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T18:25:10.878 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T18:25:10.878 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T18:25:10.878 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T18:25:10.878 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T18:25:10.878 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:25:10.878 INFO:teuthology.orchestra.run.vm09.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:25:10.878 INFO:teuthology.orchestra.run.vm09.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T18:25:10.878 INFO:teuthology.orchestra.run.vm09.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:25:10.878 INFO:teuthology.orchestra.run.vm09.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:25:10.878 INFO:teuthology.orchestra.run.vm09.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:25:10.878 INFO:teuthology.orchestra.run.vm09.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T18:25:10.878 INFO:teuthology.orchestra.run.vm09.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T18:25:10.878 INFO:teuthology.orchestra.run.vm09.stdout: cryptsetup-2.8.1-3.el9.x86_64
2026-03-09T18:25:10.878 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-3.0.4-9.el9.x86_64
2026-03-09T18:25:10.878 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64
2026-03-09T18:25:10.878 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64
2026-03-09T18:25:10.878 INFO:teuthology.orchestra.run.vm09.stdout: gperftools-libs-2.9.1-3.el9.x86_64
2026-03-09T18:25:10.878 INFO:teuthology.orchestra.run.vm09.stdout: grpc-data-1.46.7-10.el9.noarch
2026-03-09T18:25:10.878 INFO:teuthology.orchestra.run.vm09.stdout: ledmon-libs-1.1.0-3.el9.x86_64
2026-03-09T18:25:10.878 INFO:teuthology.orchestra.run.vm09.stdout: libarrow-9.0.0-15.el9.x86_64
2026-03-09T18:25:10.878 INFO:teuthology.orchestra.run.vm09.stdout: libarrow-doc-9.0.0-15.el9.noarch
2026-03-09T18:25:10.878 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:25:10.878 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:25:10.878 INFO:teuthology.orchestra.run.vm09.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:25:10.878 INFO:teuthology.orchestra.run.vm09.stdout: libconfig-1.7.2-9.el9.x86_64
2026-03-09T18:25:10.878 INFO:teuthology.orchestra.run.vm09.stdout: libgfortran-11.5.0-14.el9.x86_64
2026-03-09T18:25:10.878 INFO:teuthology.orchestra.run.vm09.stdout: libnbd-1.20.3-4.el9.x86_64
2026-03-09T18:25:10.878 INFO:teuthology.orchestra.run.vm09.stdout: liboath-2.6.12-1.el9.x86_64
2026-03-09T18:25:10.878 INFO:teuthology.orchestra.run.vm09.stdout: libpmemobj-1.12.1-1.el9.x86_64
2026-03-09T18:25:10.878 INFO:teuthology.orchestra.run.vm09.stdout: libquadmath-11.5.0-14.el9.x86_64
2026-03-09T18:25:10.878 INFO:teuthology.orchestra.run.vm09.stdout: librabbitmq-0.11.0-7.el9.x86_64
2026-03-09T18:25:10.878 INFO:teuthology.orchestra.run.vm09.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:25:10.878 INFO:teuthology.orchestra.run.vm09.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:25:10.878 INFO:teuthology.orchestra.run.vm09.stdout: librdkafka-1.6.1-102.el9.x86_64
2026-03-09T18:25:10.878 INFO:teuthology.orchestra.run.vm09.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:25:10.878 INFO:teuthology.orchestra.run.vm09.stdout: libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-09T18:25:10.878 INFO:teuthology.orchestra.run.vm09.stdout: libunwind-1.6.2-1.el9.x86_64
2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: libxslt-1.1.34-12.el9.x86_64
2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: lttng-ust-2.12.0-6.el9.x86_64
2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: lua-5.4.4-4.el9.x86_64
2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: lua-devel-5.4.4-4.el9.x86_64
2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: luarocks-3.9.2-5.el9.noarch
2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: mailcap-2.1.49-5.el9.noarch
2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: openblas-0.3.29-1.el9.x86_64
2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: openblas-openmp-0.3.29-1.el9.x86_64
2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: parquet-libs-9.0.0-15.el9.x86_64
2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: pciutils-3.7.0-7.el9.x86_64
2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: protobuf-3.14.0-17.el9.x86_64
2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: protobuf-compiler-3.14.0-17.el9.x86_64
2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-asyncssh-2.13.2-5.el9.noarch
2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-autocommand-2.2.2-8.el9.noarch
2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-babel-2.9.1-2.el9.noarch
2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch
2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-bcrypt-3.2.2-1.el9.x86_64
2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools-4.2.4-1.el9.noarch
2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-certifi-2023.05.07-4.el9.noarch
2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-cffi-1.14.5-5.el9.x86_64
2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-cheroot-10.0.1-4.el9.noarch
2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy-18.6.1-2.el9.noarch
2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-cryptography-36.0.1-5.el9.x86_64
2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-devel-3.9.25-3.el9.x86_64
2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-google-auth-1:2.45.0-1.el9.noarch
2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: 
python3-grpcio-1.46.7-10.el9.x86_64 2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-iniconfig-1.1.1-7.el9.noarch 2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-jmespath-1.0.1-1.el9.noarch 2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-09T18:25:10.879 INFO:teuthology.orchestra.run.vm09.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-09T18:25:10.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-09T18:25:10.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-09T18:25:10.880 
INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-09T18:25:10.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-packaging-20.9-5.el9.noarch 2026-03-09T18:25:10.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-09T18:25:10.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-pluggy-0.13.1-7.el9.noarch 2026-03-09T18:25:10.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-ply-3.11-14.el9.noarch 2026-03-09T18:25:10.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-09T18:25:10.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-protobuf-3.14.0-17.el9.noarch 2026-03-09T18:25:10.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-py-1.10.0-6.el9.noarch 2026-03-09T18:25:10.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-09T18:25:10.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-09T18:25:10.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-09T18:25:10.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-09T18:25:10.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-pytest-6.2.2-7.el9.noarch 2026-03-09T18:25:10.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:25:10.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:25:10.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-09T18:25:10.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-09T18:25:10.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-09T18:25:10.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:25:10.880 
INFO:teuthology.orchestra.run.vm09.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-09T18:25:10.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-09T18:25:10.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-09T18:25:10.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-09T18:25:10.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-09T18:25:10.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-09T18:25:10.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-09T18:25:10.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-09T18:25:10.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-09T18:25:10.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-09T18:25:10.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-xmltodict-0.12.0-15.el9.noarch 2026-03-09T18:25:10.880 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-09T18:25:10.880 INFO:teuthology.orchestra.run.vm09.stdout: qatlib-25.08.0-2.el9.x86_64 2026-03-09T18:25:10.880 INFO:teuthology.orchestra.run.vm09.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-09T18:25:10.880 INFO:teuthology.orchestra.run.vm09.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-09T18:25:10.880 INFO:teuthology.orchestra.run.vm09.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:25:10.880 INFO:teuthology.orchestra.run.vm09.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:25:10.880 INFO:teuthology.orchestra.run.vm09.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:25:10.880 INFO:teuthology.orchestra.run.vm09.stdout: re2-1:20211101-20.el9.x86_64 2026-03-09T18:25:10.880 
INFO:teuthology.orchestra.run.vm09.stdout: socat-1.7.4.1-8.el9.x86_64 2026-03-09T18:25:10.880 INFO:teuthology.orchestra.run.vm09.stdout: thrift-0.15.0-4.el9.x86_64 2026-03-09T18:25:10.880 INFO:teuthology.orchestra.run.vm09.stdout: unzip-6.0-59.el9.x86_64 2026-03-09T18:25:10.880 INFO:teuthology.orchestra.run.vm09.stdout: xmlstarlet-1.6.1-20.el9.x86_64 2026-03-09T18:25:10.881 INFO:teuthology.orchestra.run.vm09.stdout: zip-3.0-35.el9.x86_64 2026-03-09T18:25:10.881 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:25:10.881 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T18:25:10.977 DEBUG:teuthology.parallel:result is None 2026-03-09T18:25:14.340 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 111/142 2026-03-09T18:25:14.340 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /sys 2026-03-09T18:25:14.340 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /proc 2026-03-09T18:25:14.340 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /mnt 2026-03-09T18:25:14.340 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /var/tmp 2026-03-09T18:25:14.340 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /home 2026-03-09T18:25:14.340 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /root 2026-03-09T18:25:14.340 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /tmp 2026-03-09T18:25:14.340 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:25:14.479 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 112/142 2026-03-09T18:25:14.503 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 112/142 2026-03-09T18:25:14.503 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this. 
2026-03-09T18:25:14.503 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service". 2026-03-09T18:25:14.503 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target. 2026-03-09T18:25:14.503 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target. 2026-03-09T18:25:14.503 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:25:14.756 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 113/142 2026-03-09T18:25:14.782 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 113/142 2026-03-09T18:25:14.782 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T18:25:14.782 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service". 2026-03-09T18:25:14.782 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target. 2026-03-09T18:25:14.782 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target. 
2026-03-09T18:25:14.782 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:25:14.791 INFO:teuthology.orchestra.run.vm04.stdout: Installing : mailcap-2.1.49-5.el9.noarch 114/142 2026-03-09T18:25:14.795 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 115/142 2026-03-09T18:25:14.813 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 116/142 2026-03-09T18:25:14.814 INFO:teuthology.orchestra.run.vm04.stdout:Creating group 'qat' with GID 994. 2026-03-09T18:25:14.814 INFO:teuthology.orchestra.run.vm04.stdout:Creating group 'libstoragemgmt' with GID 993. 2026-03-09T18:25:14.814 INFO:teuthology.orchestra.run.vm04.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 993 and GID 993. 2026-03-09T18:25:14.814 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:25:14.825 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 116/142 2026-03-09T18:25:14.854 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 116/142 2026-03-09T18:25:14.854 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service. 
2026-03-09T18:25:14.854 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:25:14.898 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 117/142 2026-03-09T18:25:14.983 INFO:teuthology.orchestra.run.vm04.stdout: Installing : cryptsetup-2.8.1-3.el9.x86_64 118/142 2026-03-09T18:25:14.989 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 119/142 2026-03-09T18:25:15.007 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 119/142 2026-03-09T18:25:15.007 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T18:25:15.007 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service". 2026-03-09T18:25:15.007 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:25:15.873 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 120/142 2026-03-09T18:25:15.903 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 120/142 2026-03-09T18:25:15.904 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T18:25:15.904 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service". 2026-03-09T18:25:15.904 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target. 2026-03-09T18:25:15.904 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target. 
2026-03-09T18:25:15.904 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:25:15.973 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 121/142 2026-03-09T18:25:15.978 INFO:teuthology.orchestra.run.vm04.stdout: Installing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 121/142 2026-03-09T18:25:15.984 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 122/142 2026-03-09T18:25:16.010 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 123/142 2026-03-09T18:25:16.014 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 124/142 2026-03-09T18:25:16.612 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 124/142 2026-03-09T18:25:16.619 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 125/142 2026-03-09T18:25:17.201 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 125/142 2026-03-09T18:25:17.204 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 126/142 2026-03-09T18:25:17.279 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 126/142 2026-03-09T18:25:17.350 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 127/142 2026-03-09T18:25:17.353 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 128/142 2026-03-09T18:25:17.384 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 128/142 2026-03-09T18:25:17.384 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this. 
2026-03-09T18:25:17.384 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service". 2026-03-09T18:25:17.384 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target. 2026-03-09T18:25:17.384 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target. 2026-03-09T18:25:17.384 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:25:17.399 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 129/142 2026-03-09T18:25:17.415 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 129/142 2026-03-09T18:25:18.000 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 130/142 2026-03-09T18:25:18.003 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 131/142 2026-03-09T18:25:18.030 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 131/142 2026-03-09T18:25:18.030 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T18:25:18.030 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service". 2026-03-09T18:25:18.030 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target. 2026-03-09T18:25:18.030 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target. 
2026-03-09T18:25:18.030 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:25:18.042 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 132/142 2026-03-09T18:25:18.068 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 132/142 2026-03-09T18:25:18.068 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T18:25:18.068 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service". 2026-03-09T18:25:18.068 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:25:18.255 INFO:teuthology.orchestra.run.vm04.stdout: Installing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 133/142 2026-03-09T18:25:18.282 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 133/142 2026-03-09T18:25:18.283 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T18:25:18.283 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service". 2026-03-09T18:25:18.283 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target. 2026-03-09T18:25:18.283 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target. 
2026-03-09T18:25:18.283 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:25:21.435 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 134/142 2026-03-09T18:25:21.448 INFO:teuthology.orchestra.run.vm04.stdout: Installing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 135/142 2026-03-09T18:25:21.507 INFO:teuthology.orchestra.run.vm04.stdout: Installing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 136/142 2026-03-09T18:25:21.517 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-pytest-6.2.2-7.el9.noarch 137/142 2026-03-09T18:25:21.583 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 138/142 2026-03-09T18:25:21.599 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 139/142 2026-03-09T18:25:21.603 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-jmespath-1.0.1-1.el9.noarch 140/142 2026-03-09T18:25:21.604 INFO:teuthology.orchestra.run.vm04.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 141/142 2026-03-09T18:25:21.623 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 141/142 2026-03-09T18:25:21.623 INFO:teuthology.orchestra.run.vm04.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 142/142 2026-03-09T18:25:23.132 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 142/142 2026-03-09T18:25:23.132 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/142 2026-03-09T18:25:23.132 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/142 2026-03-09T18:25:23.132 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/142 2026-03-09T18:25:23.132 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 4/142 2026-03-09T18:25:23.132 
INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/142 2026-03-09T18:25:23.132 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 6/142 2026-03-09T18:25:23.132 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 7/142 2026-03-09T18:25:23.132 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/142 2026-03-09T18:25:23.132 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 9/142 2026-03-09T18:25:23.132 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 10/142 2026-03-09T18:25:23.132 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 11/142 2026-03-09T18:25:23.132 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 12/142 2026-03-09T18:25:23.132 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 13/142 2026-03-09T18:25:23.132 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 14/142 2026-03-09T18:25:23.133 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 15/142 2026-03-09T18:25:23.133 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 16/142 2026-03-09T18:25:23.133 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 17/142 2026-03-09T18:25:23.133 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 18/142 2026-03-09T18:25:23.133 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 19/142 2026-03-09T18:25:23.133 
INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 20/142 2026-03-09T18:25:23.133 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 21/142 2026-03-09T18:25:23.133 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 22/142 2026-03-09T18:25:23.133 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 23/142 2026-03-09T18:25:23.133 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 24/142 2026-03-09T18:25:23.133 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 25/142 2026-03-09T18:25:23.133 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 26/142 2026-03-09T18:25:23.133 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 27/142 2026-03-09T18:25:23.133 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 28/142 2026-03-09T18:25:23.133 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 29/142 2026-03-09T18:25:23.133 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 30/142 2026-03-09T18:25:23.133 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 31/142 2026-03-09T18:25:23.133 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 32/142 2026-03-09T18:25:23.133 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 33/142 2026-03-09T18:25:23.133 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 34/142 2026-03-09T18:25:23.133 
INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 35/142 2026-03-09T18:25:23.133 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 36/142 2026-03-09T18:25:23.133 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 37/142 2026-03-09T18:25:23.133 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 38/142 2026-03-09T18:25:23.133 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 39/142 2026-03-09T18:25:23.133 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 40/142 2026-03-09T18:25:23.133 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 41/142 2026-03-09T18:25:23.133 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 42/142 2026-03-09T18:25:23.133 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 43/142 2026-03-09T18:25:23.133 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/142 2026-03-09T18:25:23.134 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 45/142 2026-03-09T18:25:23.134 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-ply-3.11-14.el9.noarch 46/142 2026-03-09T18:25:23.134 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 47/142 2026-03-09T18:25:23.134 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 48/142 2026-03-09T18:25:23.134 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 49/142 2026-03-09T18:25:23.134 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : unzip-6.0-59.el9.x86_64 50/142 2026-03-09T18:25:23.134 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : zip-3.0-35.el9.x86_64 51/142 
2026-03-09T18:25:23.134 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 52/142 2026-03-09T18:25:23.134 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 53/142 2026-03-09T18:25:23.134 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 54/142 2026-03-09T18:25:23.134 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 55/142 2026-03-09T18:25:23.134 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 56/142 2026-03-09T18:25:23.134 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 57/142 2026-03-09T18:25:23.134 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 58/142 2026-03-09T18:25:23.134 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 59/142 2026-03-09T18:25:23.134 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 60/142 2026-03-09T18:25:23.134 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 61/142 2026-03-09T18:25:23.134 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 62/142 2026-03-09T18:25:23.134 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : lua-5.4.4-4.el9.x86_64 63/142 2026-03-09T18:25:23.134 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 64/142 2026-03-09T18:25:23.134 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 65/142 2026-03-09T18:25:23.135 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 66/142 2026-03-09T18:25:23.135 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 67/142 2026-03-09T18:25:23.135 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : 
python3-devel-3.9.25-3.el9.x86_64 68/142 2026-03-09T18:25:23.135 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-iniconfig-1.1.1-7.el9.noarch 69/142 2026-03-09T18:25:23.135 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 70/142 2026-03-09T18:25:23.135 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jmespath-1.0.1-1.el9.noarch 71/142 2026-03-09T18:25:23.135 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 72/142 2026-03-09T18:25:23.135 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 73/142 2026-03-09T18:25:23.135 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 74/142 2026-03-09T18:25:23.135 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 75/142 2026-03-09T18:25:23.135 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 76/142 2026-03-09T18:25:23.135 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 77/142 2026-03-09T18:25:23.135 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pluggy-0.13.1-7.el9.noarch 78/142 2026-03-09T18:25:23.135 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 79/142 2026-03-09T18:25:23.135 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-py-1.10.0-6.el9.noarch 80/142 2026-03-09T18:25:23.136 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 81/142 2026-03-09T18:25:23.136 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 82/142 2026-03-09T18:25:23.136 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pytest-6.2.2-7.el9.noarch 83/142 2026-03-09T18:25:23.136 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : 
python3-requests-oauthlib-1.3.0-12.el9.noarch 84/142 2026-03-09T18:25:23.136 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 85/142 2026-03-09T18:25:23.136 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 86/142 2026-03-09T18:25:23.136 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 87/142 2026-03-09T18:25:23.136 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 88/142 2026-03-09T18:25:23.136 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 89/142 2026-03-09T18:25:23.136 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 90/142 2026-03-09T18:25:23.136 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 91/142 2026-03-09T18:25:23.136 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 92/142 2026-03-09T18:25:23.136 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 93/142 2026-03-09T18:25:23.136 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 94/142 2026-03-09T18:25:23.136 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 95/142 2026-03-09T18:25:23.136 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 96/142 2026-03-09T18:25:23.136 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 97/142 2026-03-09T18:25:23.136 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 98/142 2026-03-09T18:25:23.136 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 99/142 2026-03-09T18:25:23.136 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 100/142 2026-03-09T18:25:23.136 
INFO:teuthology.orchestra.run.vm04.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 101/142 2026-03-09T18:25:23.136 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 102/142 2026-03-09T18:25:23.136 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 103/142 2026-03-09T18:25:23.136 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 104/142 2026-03-09T18:25:23.136 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 105/142 2026-03-09T18:25:23.136 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 106/142 2026-03-09T18:25:23.136 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 107/142 2026-03-09T18:25:23.136 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 108/142 2026-03-09T18:25:23.136 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 109/142 2026-03-09T18:25:23.136 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 110/142 2026-03-09T18:25:23.136 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 111/142 2026-03-09T18:25:23.137 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 112/142 2026-03-09T18:25:23.137 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 113/142 2026-03-09T18:25:23.137 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 114/142 2026-03-09T18:25:23.137 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 115/142 2026-03-09T18:25:23.137 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 116/142 
2026-03-09T18:25:23.137 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 117/142 2026-03-09T18:25:23.137 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 118/142 2026-03-09T18:25:23.137 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 119/142 2026-03-09T18:25:23.137 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 120/142 2026-03-09T18:25:23.137 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 121/142 2026-03-09T18:25:23.137 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 122/142 2026-03-09T18:25:23.137 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 123/142 2026-03-09T18:25:23.137 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 124/142 2026-03-09T18:25:23.137 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 125/142 2026-03-09T18:25:23.137 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 126/142 2026-03-09T18:25:23.137 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 127/142 2026-03-09T18:25:23.137 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 128/142 2026-03-09T18:25:23.137 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 129/142 2026-03-09T18:25:23.137 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 130/142 2026-03-09T18:25:23.137 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 131/142 2026-03-09T18:25:23.137 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 132/142 
2026-03-09T18:25:23.137 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 133/142 2026-03-09T18:25:23.137 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 134/142 2026-03-09T18:25:23.137 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 135/142 2026-03-09T18:25:23.137 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 136/142 2026-03-09T18:25:23.138 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : re2-1:20211101-20.el9.x86_64 137/142 2026-03-09T18:25:23.138 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 138/142 2026-03-09T18:25:23.138 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 139/142 2026-03-09T18:25:23.138 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 140/142 2026-03-09T18:25:23.138 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 141/142 2026-03-09T18:25:23.247 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 142/142 2026-03-09T18:25:23.247 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:25:23.247 INFO:teuthology.orchestra.run.vm04.stdout:Upgraded: 2026-03-09T18:25:23.247 INFO:teuthology.orchestra.run.vm04.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:25:23.247 INFO:teuthology.orchestra.run.vm04.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:25:23.247 INFO:teuthology.orchestra.run.vm04.stdout:Installed: 2026-03-09T18:25:23.247 INFO:teuthology.orchestra.run.vm04.stdout: abseil-cpp-20211102.0-4.el9.x86_64 2026-03-09T18:25:23.247 INFO:teuthology.orchestra.run.vm04.stdout: boost-program-options-1.75.0-13.el9.x86_64 2026-03-09T18:25:23.247 INFO:teuthology.orchestra.run.vm04.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 
2026-03-09T18:25:23.247 INFO:teuthology.orchestra.run.vm04.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:25:23.247 INFO:teuthology.orchestra.run.vm04.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:25:23.247 INFO:teuthology.orchestra.run.vm04.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:25:23.247 INFO:teuthology.orchestra.run.vm04.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T18:25:23.247 INFO:teuthology.orchestra.run.vm04.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:25:23.247 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:25:23.247 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:25:23.247 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T18:25:23.247 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T18:25:23.247 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T18:25:23.247 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T18:25:23.247 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T18:25:23.247 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:25:23.247 INFO:teuthology.orchestra.run.vm04.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:25:23.247 INFO:teuthology.orchestra.run.vm04.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T18:25:23.247 INFO:teuthology.orchestra.run.vm04.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:25:23.247 INFO:teuthology.orchestra.run.vm04.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 
2026-03-09T18:25:23.247 INFO:teuthology.orchestra.run.vm04.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:25:23.247 INFO:teuthology.orchestra.run.vm04.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T18:25:23.247 INFO:teuthology.orchestra.run.vm04.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T18:25:23.247 INFO:teuthology.orchestra.run.vm04.stdout: cryptsetup-2.8.1-3.el9.x86_64 2026-03-09T18:25:23.247 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-09T18:25:23.247 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-09T18:25:23.247 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: grpc-data-1.46.7-10.el9.noarch 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: libarrow-9.0.0-15.el9.x86_64 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: libarrow-doc-9.0.0-15.el9.noarch 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: libnbd-1.20.3-4.el9.x86_64 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-09T18:25:23.248 
INFO:teuthology.orchestra.run.vm04.stdout: libpmemobj-1.12.1-1.el9.x86_64 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: librabbitmq-0.11.0-7.el9.x86_64 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: librdkafka-1.6.1-102.el9.x86_64 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: libxslt-1.1.34-12.el9.x86_64 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: lttng-ust-2.12.0-6.el9.x86_64 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: lua-5.4.4-4.el9.x86_64 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: lua-devel-5.4.4-4.el9.x86_64 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: luarocks-3.9.2-5.el9.noarch 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: mailcap-2.1.49-5.el9.noarch 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: parquet-libs-9.0.0-15.el9.x86_64 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: pciutils-3.7.0-7.el9.x86_64 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: protobuf-3.14.0-17.el9.x86_64 2026-03-09T18:25:23.248 
INFO:teuthology.orchestra.run.vm04.stdout: protobuf-compiler-3.14.0-17.el9.x86_64 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: python3-grpcio-1.46.7-10.el9.x86_64 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: 
python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: python3-iniconfig-1.1.1-7.el9.noarch 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: python3-jmespath-1.0.1-1.el9.noarch 2026-03-09T18:25:23.248 INFO:teuthology.orchestra.run.vm04.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-09T18:25:23.249 
INFO:teuthology.orchestra.run.vm04.stdout: python3-packaging-20.9-5.el9.noarch 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: python3-pluggy-0.13.1-7.el9.noarch 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: python3-ply-3.11-14.el9.noarch 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: python3-protobuf-3.14.0-17.el9.noarch 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: python3-py-1.10.0-6.el9.noarch 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: python3-pytest-6.2.2-7.el9.noarch 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-09T18:25:23.249 
INFO:teuthology.orchestra.run.vm04.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: python3-xmltodict-0.12.0-15.el9.noarch 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: qatlib-25.08.0-2.el9.x86_64 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: re2-1:20211101-20.el9.x86_64 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: socat-1.7.4.1-8.el9.x86_64 2026-03-09T18:25:23.249 
INFO:teuthology.orchestra.run.vm04.stdout: thrift-0.15.0-4.el9.x86_64 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: unzip-6.0-59.el9.x86_64 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: xmlstarlet-1.6.1-20.el9.x86_64 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: zip-3.0-35.el9.x86_64 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:25:23.249 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 2026-03-09T18:25:23.367 DEBUG:teuthology.parallel:result is None 2026-03-09T18:25:23.367 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:25:23.984 DEBUG:teuthology.orchestra.run.vm04:> rpm -q ceph --qf '%{VERSION}-%{RELEASE}' 2026-03-09T18:25:24.007 INFO:teuthology.orchestra.run.vm04.stdout:19.2.3-678.ge911bdeb.el9 2026-03-09T18:25:24.007 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678.ge911bdeb.el9 2026-03-09T18:25:24.007 INFO:teuthology.task.install:The correct ceph version 19.2.3-678.ge911bdeb is installed. 2026-03-09T18:25:24.008 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:25:24.612 DEBUG:teuthology.orchestra.run.vm09:> rpm -q ceph --qf '%{VERSION}-%{RELEASE}' 2026-03-09T18:25:24.634 INFO:teuthology.orchestra.run.vm09.stdout:19.2.3-678.ge911bdeb.el9 2026-03-09T18:25:24.634 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678.ge911bdeb.el9 2026-03-09T18:25:24.634 INFO:teuthology.task.install:The correct ceph version 19.2.3-678.ge911bdeb is installed. 2026-03-09T18:25:24.635 INFO:teuthology.task.install.util:Shipping valgrind.supp... 
2026-03-09T18:25:24.635 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T18:25:24.635 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp 2026-03-09T18:25:24.663 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T18:25:24.663 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp 2026-03-09T18:25:24.702 INFO:teuthology.task.install.util:Shipping 'daemon-helper'... 2026-03-09T18:25:24.702 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T18:25:24.702 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/usr/bin/daemon-helper 2026-03-09T18:25:24.731 DEBUG:teuthology.orchestra.run.vm04:> sudo chmod a=rx -- /usr/bin/daemon-helper 2026-03-09T18:25:24.797 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T18:25:24.797 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/usr/bin/daemon-helper 2026-03-09T18:25:24.824 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod a=rx -- /usr/bin/daemon-helper 2026-03-09T18:25:24.895 INFO:teuthology.task.install.util:Shipping 'adjust-ulimits'... 2026-03-09T18:25:24.895 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T18:25:24.895 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/usr/bin/adjust-ulimits 2026-03-09T18:25:24.920 DEBUG:teuthology.orchestra.run.vm04:> sudo chmod a=rx -- /usr/bin/adjust-ulimits 2026-03-09T18:25:24.985 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T18:25:24.985 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/usr/bin/adjust-ulimits 2026-03-09T18:25:25.014 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod a=rx -- /usr/bin/adjust-ulimits 2026-03-09T18:25:25.083 INFO:teuthology.task.install.util:Shipping 'stdin-killer'... 
2026-03-09T18:25:25.084 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T18:25:25.084 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/usr/bin/stdin-killer 2026-03-09T18:25:25.109 DEBUG:teuthology.orchestra.run.vm04:> sudo chmod a=rx -- /usr/bin/stdin-killer 2026-03-09T18:25:25.174 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T18:25:25.175 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/usr/bin/stdin-killer 2026-03-09T18:25:25.202 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod a=rx -- /usr/bin/stdin-killer 2026-03-09T18:25:25.269 INFO:teuthology.run_tasks:Running task cephadm... 2026-03-09T18:25:25.321 INFO:tasks.cephadm:Config: {'conf': {'mgr': {'debug mgr': 20, 'debug ms': 1}, 'global': {'mon election default strategy': 1, 'ms type': 'async'}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'debug ms': 1, 'debug osd': 20, 'osd mclock iops capacity threshold hdd': 49000, 'osd shutdown pgref assert': True}}, 'flavor': 'default', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', 'but it is still running', 'overall HEALTH_', '\\(OSDMAP_FLAGS\\)', '\\(PG_', '\\(OSD_', '\\(OBJECT_', '\\(POOL_APP_NOT_ENABLED\\)'], 'log-only-match': ['CEPHADM_'], 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'cephadm_mode': 'cephadm-package'} 2026-03-09T18:25:25.321 INFO:tasks.cephadm:Cluster image is quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:25:25.321 INFO:tasks.cephadm:Cluster fsid is 5769e1c8-1be5-11f1-a591-591820987f3e 2026-03-09T18:25:25.321 INFO:tasks.cephadm:Choosing monitor IPs and ports... 2026-03-09T18:25:25.321 INFO:tasks.cephadm:Monitor IPs: {'mon.a': '192.168.123.104', 'mon.c': '[v2:192.168.123.104:3301,v1:192.168.123.104:6790]', 'mon.b': '192.168.123.109'} 2026-03-09T18:25:25.321 INFO:tasks.cephadm:First mon is mon.a on vm04 2026-03-09T18:25:25.321 INFO:tasks.cephadm:First mgr is y 2026-03-09T18:25:25.321 INFO:tasks.cephadm:Normalizing hostnames... 
2026-03-09T18:25:25.321 DEBUG:teuthology.orchestra.run.vm04:> sudo hostname $(hostname -s) 2026-03-09T18:25:25.345 DEBUG:teuthology.orchestra.run.vm09:> sudo hostname $(hostname -s) 2026-03-09T18:25:25.378 INFO:tasks.cephadm:Pulling image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df on all hosts... 2026-03-09T18:25:25.378 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull 2026-03-09T18:25:25.388 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull 2026-03-09T18:25:25.553 INFO:teuthology.orchestra.run.vm04.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 2026-03-09T18:25:25.585 INFO:teuthology.orchestra.run.vm09.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 2026-03-09T18:26:31.317 INFO:teuthology.orchestra.run.vm04.stdout:{ 2026-03-09T18:26:31.317 INFO:teuthology.orchestra.run.vm04.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)", 2026-03-09T18:26:31.317 INFO:teuthology.orchestra.run.vm04.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c", 2026-03-09T18:26:31.317 INFO:teuthology.orchestra.run.vm04.stdout: "repo_digests": [ 2026-03-09T18:26:31.317 INFO:teuthology.orchestra.run.vm04.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc" 2026-03-09T18:26:31.317 INFO:teuthology.orchestra.run.vm04.stdout: ] 2026-03-09T18:26:31.317 INFO:teuthology.orchestra.run.vm04.stdout:} 2026-03-09T18:26:31.338 INFO:teuthology.orchestra.run.vm09.stdout:{ 2026-03-09T18:26:31.338 INFO:teuthology.orchestra.run.vm09.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)", 
2026-03-09T18:26:31.338 INFO:teuthology.orchestra.run.vm09.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c", 2026-03-09T18:26:31.338 INFO:teuthology.orchestra.run.vm09.stdout: "repo_digests": [ 2026-03-09T18:26:31.338 INFO:teuthology.orchestra.run.vm09.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc" 2026-03-09T18:26:31.338 INFO:teuthology.orchestra.run.vm09.stdout: ] 2026-03-09T18:26:31.338 INFO:teuthology.orchestra.run.vm09.stdout:} 2026-03-09T18:26:31.354 DEBUG:teuthology.orchestra.run.vm04:> sudo mkdir -p /etc/ceph 2026-03-09T18:26:31.385 DEBUG:teuthology.orchestra.run.vm09:> sudo mkdir -p /etc/ceph 2026-03-09T18:26:31.411 DEBUG:teuthology.orchestra.run.vm04:> sudo chmod 777 /etc/ceph 2026-03-09T18:26:31.451 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod 777 /etc/ceph 2026-03-09T18:26:31.475 INFO:tasks.cephadm:Writing seed config... 2026-03-09T18:26:31.475 INFO:tasks.cephadm: override: [mgr] debug mgr = 20 2026-03-09T18:26:31.475 INFO:tasks.cephadm: override: [mgr] debug ms = 1 2026-03-09T18:26:31.475 INFO:tasks.cephadm: override: [global] mon election default strategy = 1 2026-03-09T18:26:31.475 INFO:tasks.cephadm: override: [global] ms type = async 2026-03-09T18:26:31.475 INFO:tasks.cephadm: override: [mon] debug mon = 20 2026-03-09T18:26:31.475 INFO:tasks.cephadm: override: [mon] debug ms = 1 2026-03-09T18:26:31.475 INFO:tasks.cephadm: override: [mon] debug paxos = 20 2026-03-09T18:26:31.475 INFO:tasks.cephadm: override: [osd] debug ms = 1 2026-03-09T18:26:31.475 INFO:tasks.cephadm: override: [osd] debug osd = 20 2026-03-09T18:26:31.475 INFO:tasks.cephadm: override: [osd] osd mclock iops capacity threshold hdd = 49000 2026-03-09T18:26:31.475 INFO:tasks.cephadm: override: [osd] osd shutdown pgref assert = True 2026-03-09T18:26:31.476 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T18:26:31.476 DEBUG:teuthology.orchestra.run.vm04:> dd 
of=/home/ubuntu/cephtest/seed.ceph.conf
2026-03-09T18:26:31.508 DEBUG:tasks.cephadm:Final config:
[global]
# make logging friendly to teuthology
log_to_file = true
log_to_stderr = false
log to journald = false
mon cluster log to file = true
mon cluster log file level = debug
mon clock drift allowed = 1.000
# replicate across OSDs, not hosts
osd crush chooseleaf type = 0
#osd pool default size = 2
osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd
# enable some debugging
auth debug = true
ms die on old message = true
ms die on bug = true
debug asserts on shutdown = true
# adjust warnings
mon max pg per osd = 10000# >= luminous
mon pg warn max object skew = 0
mon osd allow primary affinity = true
mon osd allow pg remap = true
mon warn on legacy crush tunables = false
mon warn on crush straw calc version zero = false
mon warn on no sortbitwise = false
mon warn on osd down out interval zero = false
mon warn on too few osds = false
mon_warn_on_pool_pg_num_not_power_of_two = false
# disable pg_autoscaler by default for new pools
osd_pool_default_pg_autoscale_mode = off
# tests delete pools
mon allow pool delete = true
fsid = 5769e1c8-1be5-11f1-a591-591820987f3e
mon election default strategy = 1
ms type = async
[osd]
osd scrub load threshold = 5.0
osd scrub max interval = 600
osd mclock profile = high_recovery_ops
osd recover clone overlap = true
osd recovery max chunk = 1048576
osd deep scrub update digest min age = 30
osd map max advance = 10
osd memory target autotune = true
# debugging
osd debug shutdown = true
osd debug op order = true
osd debug verify stray on activate = true
osd debug pg log writeout = true
osd debug verify cached snaps = true
osd debug verify missing on start = true
osd debug misdirected ops = true
osd op queue = debug_random
osd op queue cut off = debug_random
osd shutdown pgref assert = True
bdev debug aio = true
osd sloppy crc = true
debug ms = 1
debug osd = 20
osd mclock iops capacity threshold hdd = 49000
[mgr]
mon reweight min pgs per osd = 4
mon reweight min bytes per osd = 10
mgr/telemetry/nag = false
debug mgr = 20
debug ms = 1
[mon]
mon data avail warn = 5
mon mgr mkfs grace = 240
mon reweight min pgs per osd = 4
mon osd reporter subtree level = osd
mon osd prime pg temp = true
mon reweight min bytes per osd = 10
# rotate auth tickets quickly to exercise renewal paths
auth mon ticket ttl = 660# 11m
auth service ticket ttl = 240# 4m
# don't complain about global id reclaim
mon_warn_on_insecure_global_id_reclaim = false
mon_warn_on_insecure_global_id_reclaim_allowed = false
debug mon = 20
debug ms = 1
debug paxos = 20
[client.rgw]
rgw cache enabled = true
rgw enable ops log = true
rgw enable usage log = true
2026-03-09T18:26:31.509 DEBUG:teuthology.orchestra.run.vm04:mon.a> sudo journalctl -f -n 0 -u ceph-5769e1c8-1be5-11f1-a591-591820987f3e@mon.a.service
2026-03-09T18:26:31.550 DEBUG:teuthology.orchestra.run.vm04:mgr.y> sudo journalctl -f -n 0 -u ceph-5769e1c8-1be5-11f1-a591-591820987f3e@mgr.y.service
2026-03-09T18:26:31.592 INFO:tasks.cephadm:Bootstrapping...
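[Annotation] The seed config dumped above is plain INI-style ceph.conf. As a quick illustration (not part of teuthology), Python's `configparser` can read an excerpt of it; the fsid and option values below are copied verbatim from the dump:

```python
import configparser

# Excerpt copied from the "Final config" dump above; ceph.conf is INI-style,
# with spaces allowed inside option names.
SEED_EXCERPT = """
[global]
fsid = 5769e1c8-1be5-11f1-a591-591820987f3e
mon allow pool delete = true

[osd]
debug osd = 20
osd shutdown pgref assert = True
"""

cfg = configparser.ConfigParser()
cfg.read_string(SEED_EXCERPT)

fsid = cfg.get("global", "fsid")
pool_delete = cfg.getboolean("global", "mon allow pool delete")
debug_osd = cfg.getint("osd", "debug osd")
pgref_assert = cfg.getboolean("osd", "osd shutdown pgref assert")
print(fsid, pool_delete, debug_osd, pgref_assert)
```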
2026-03-09T18:26:31.592 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df -v bootstrap --fsid 5769e1c8-1be5-11f1-a591-591820987f3e --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id y --orphan-initial-daemons --skip-monitoring-stack --mon-ip 192.168.123.104 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring 2026-03-09T18:26:31.737 INFO:teuthology.orchestra.run.vm04.stdout:-------------------------------------------------------------------------------- 2026-03-09T18:26:31.738 INFO:teuthology.orchestra.run.vm04.stdout:cephadm ['--image', 'quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df', '-v', 'bootstrap', '--fsid', '5769e1c8-1be5-11f1-a591-591820987f3e', '--config', '/home/ubuntu/cephtest/seed.ceph.conf', '--output-config', '/etc/ceph/ceph.conf', '--output-keyring', '/etc/ceph/ceph.client.admin.keyring', '--output-pub-ssh-key', '/home/ubuntu/cephtest/ceph.pub', '--mon-id', 'a', '--mgr-id', 'y', '--orphan-initial-daemons', '--skip-monitoring-stack', '--mon-ip', '192.168.123.104', '--skip-admin-label'] 2026-03-09T18:26:31.738 INFO:teuthology.orchestra.run.vm04.stderr:Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts. 2026-03-09T18:26:31.738 INFO:teuthology.orchestra.run.vm04.stdout:Verifying podman|docker is present... 2026-03-09T18:26:31.758 INFO:teuthology.orchestra.run.vm04.stdout:/bin/podman: stdout 5.8.0 2026-03-09T18:26:31.758 INFO:teuthology.orchestra.run.vm04.stdout:Verifying lvm2 is present... 2026-03-09T18:26:31.758 INFO:teuthology.orchestra.run.vm04.stdout:Verifying time synchronization is in place... 
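[Annotation] For reference, the bootstrap argv echoed above can be sketched as a flag list assembled in Python. This is a simplified illustration, not teuthology's actual code; the values are copied from the log, and the real command interleaves the bare flags differently:

```python
# Simplified sketch of assembling the cephadm bootstrap invocation shown
# above. Values are copied from the log lines; this is not teuthology's code.
image = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"
options = {
    "--fsid": "5769e1c8-1be5-11f1-a591-591820987f3e",
    "--config": "/home/ubuntu/cephtest/seed.ceph.conf",
    "--output-config": "/etc/ceph/ceph.conf",
    "--output-keyring": "/etc/ceph/ceph.client.admin.keyring",
    "--output-pub-ssh-key": "/home/ubuntu/cephtest/ceph.pub",
    "--mon-id": "a",
    "--mgr-id": "y",
    "--mon-ip": "192.168.123.104",
}
bare_flags = ["--orphan-initial-daemons", "--skip-monitoring-stack",
              "--skip-admin-label"]

argv = ["cephadm", "--image", image, "-v", "bootstrap"]
for flag, value in options.items():
    argv += [flag, value]
argv += bare_flags
print(" ".join(argv))
```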
2026-03-09T18:26:31.766 INFO:teuthology.orchestra.run.vm04.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service 2026-03-09T18:26:31.766 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory 2026-03-09T18:26:31.772 INFO:teuthology.orchestra.run.vm04.stdout:Non-zero exit code 3 from systemctl is-active chrony.service 2026-03-09T18:26:31.772 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stdout inactive 2026-03-09T18:26:31.778 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stdout enabled 2026-03-09T18:26:31.785 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stdout active 2026-03-09T18:26:31.785 INFO:teuthology.orchestra.run.vm04.stdout:Unit chronyd.service is enabled and running 2026-03-09T18:26:31.785 INFO:teuthology.orchestra.run.vm04.stdout:Repeating the final host check... 2026-03-09T18:26:31.804 INFO:teuthology.orchestra.run.vm04.stdout:/bin/podman: stdout 5.8.0 2026-03-09T18:26:31.804 INFO:teuthology.orchestra.run.vm04.stdout:podman (/bin/podman) version 5.8.0 is present 2026-03-09T18:26:31.805 INFO:teuthology.orchestra.run.vm04.stdout:systemctl is present 2026-03-09T18:26:31.805 INFO:teuthology.orchestra.run.vm04.stdout:lvcreate is present 2026-03-09T18:26:31.811 INFO:teuthology.orchestra.run.vm04.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service 2026-03-09T18:26:31.811 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory 2026-03-09T18:26:31.817 INFO:teuthology.orchestra.run.vm04.stdout:Non-zero exit code 3 from systemctl is-active chrony.service 2026-03-09T18:26:31.817 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stdout inactive 2026-03-09T18:26:31.823 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stdout enabled 2026-03-09T18:26:31.828 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stdout active 2026-03-09T18:26:31.828 
INFO:teuthology.orchestra.run.vm04.stdout:Unit chronyd.service is enabled and running 2026-03-09T18:26:31.828 INFO:teuthology.orchestra.run.vm04.stdout:Host looks OK 2026-03-09T18:26:31.828 INFO:teuthology.orchestra.run.vm04.stdout:Cluster fsid: 5769e1c8-1be5-11f1-a591-591820987f3e 2026-03-09T18:26:31.828 INFO:teuthology.orchestra.run.vm04.stdout:Acquiring lock 139955253504752 on /run/cephadm/5769e1c8-1be5-11f1-a591-591820987f3e.lock 2026-03-09T18:26:31.829 INFO:teuthology.orchestra.run.vm04.stdout:Lock 139955253504752 acquired on /run/cephadm/5769e1c8-1be5-11f1-a591-591820987f3e.lock 2026-03-09T18:26:31.829 INFO:teuthology.orchestra.run.vm04.stdout:Verifying IP 192.168.123.104 port 3300 ... 2026-03-09T18:26:31.829 INFO:teuthology.orchestra.run.vm04.stdout:Verifying IP 192.168.123.104 port 6789 ... 2026-03-09T18:26:31.829 INFO:teuthology.orchestra.run.vm04.stdout:Base mon IP(s) is [192.168.123.104:3300, 192.168.123.104:6789], mon addrv is [v2:192.168.123.104:3300,v1:192.168.123.104:6789] 2026-03-09T18:26:31.832 INFO:teuthology.orchestra.run.vm04.stdout:/sbin/ip: stdout default via 192.168.123.1 dev eth0 proto dhcp src 192.168.123.104 metric 100 2026-03-09T18:26:31.832 INFO:teuthology.orchestra.run.vm04.stdout:/sbin/ip: stdout 192.168.123.0/24 dev eth0 proto kernel scope link src 192.168.123.104 metric 100 2026-03-09T18:26:31.834 INFO:teuthology.orchestra.run.vm04.stdout:/sbin/ip: stdout ::1 dev lo proto kernel metric 256 pref medium 2026-03-09T18:26:31.834 INFO:teuthology.orchestra.run.vm04.stdout:/sbin/ip: stdout fe80::/64 dev eth0 proto kernel metric 1024 pref medium 2026-03-09T18:26:31.836 INFO:teuthology.orchestra.run.vm04.stdout:/sbin/ip: stdout 1: lo: mtu 65536 state UNKNOWN qlen 1000 2026-03-09T18:26:31.836 INFO:teuthology.orchestra.run.vm04.stdout:/sbin/ip: stdout inet6 ::1/128 scope host 2026-03-09T18:26:31.836 INFO:teuthology.orchestra.run.vm04.stdout:/sbin/ip: stdout valid_lft forever preferred_lft forever 2026-03-09T18:26:31.836 
INFO:teuthology.orchestra.run.vm04.stdout:/sbin/ip: stdout 2: eth0: mtu 1500 state UP qlen 1000 2026-03-09T18:26:31.836 INFO:teuthology.orchestra.run.vm04.stdout:/sbin/ip: stdout inet6 fe80::5055:ff:fe00:4/64 scope link noprefixroute 2026-03-09T18:26:31.836 INFO:teuthology.orchestra.run.vm04.stdout:/sbin/ip: stdout valid_lft forever preferred_lft forever 2026-03-09T18:26:31.837 INFO:teuthology.orchestra.run.vm04.stdout:Mon IP `192.168.123.104` is in CIDR network `192.168.123.0/24` 2026-03-09T18:26:31.837 INFO:teuthology.orchestra.run.vm04.stdout:Mon IP `192.168.123.104` is in CIDR network `192.168.123.0/24` 2026-03-09T18:26:31.837 INFO:teuthology.orchestra.run.vm04.stdout:Inferred mon public CIDR from local network configuration ['192.168.123.0/24', '192.168.123.0/24'] 2026-03-09T18:26:31.837 INFO:teuthology.orchestra.run.vm04.stdout:Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network 2026-03-09T18:26:31.838 INFO:teuthology.orchestra.run.vm04.stdout:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 2026-03-09T18:26:33.531 INFO:teuthology.orchestra.run.vm04.stdout:/bin/podman: stdout 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c 2026-03-09T18:26:33.531 INFO:teuthology.orchestra.run.vm04.stdout:/bin/podman: stderr Trying to pull quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 
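[Annotation] The CIDR inference logged above ("Mon IP `192.168.123.104` is in CIDR network `192.168.123.0/24`") can be reproduced with the standard `ipaddress` module. A minimal sketch, assuming the two networks visible in the `ip route` output:

```python
import ipaddress

# Minimal sketch of the public-network inference cephadm logs above: find the
# locally configured CIDR that contains the mon IP. The network list mirrors
# the routes shown in the /sbin/ip output.
mon_ip = ipaddress.ip_address("192.168.123.104")
local_networks = [
    ipaddress.ip_network("192.168.123.0/24"),
    ipaddress.ip_network("fe80::/64"),
]

# Membership across IP versions is simply False, so no version filter needed.
public_network = next(str(n) for n in local_networks if mon_ip in n)
print(public_network)
```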
2026-03-09T18:26:33.531 INFO:teuthology.orchestra.run.vm04.stdout:/bin/podman: stderr Getting image source signatures 2026-03-09T18:26:33.531 INFO:teuthology.orchestra.run.vm04.stdout:/bin/podman: stderr Copying blob sha256:1752b8d01aa0dd33bbe0ab24e8316174c94fbdcd5d26252e2680bba0624747a7 2026-03-09T18:26:33.531 INFO:teuthology.orchestra.run.vm04.stdout:/bin/podman: stderr Copying blob sha256:8e380faede39ebd4286247457b408d979ab568aafd8389c42ec304b8cfba4e92 2026-03-09T18:26:33.531 INFO:teuthology.orchestra.run.vm04.stdout:/bin/podman: stderr Copying config sha256:654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c 2026-03-09T18:26:33.531 INFO:teuthology.orchestra.run.vm04.stdout:/bin/podman: stderr Writing manifest to image destination 2026-03-09T18:26:34.329 INFO:teuthology.orchestra.run.vm04.stdout:ceph: stdout ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable) 2026-03-09T18:26:34.329 INFO:teuthology.orchestra.run.vm04.stdout:Ceph version: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable) 2026-03-09T18:26:34.329 INFO:teuthology.orchestra.run.vm04.stdout:Extracting ceph user uid/gid from container image... 2026-03-09T18:26:34.455 INFO:teuthology.orchestra.run.vm04.stdout:stat: stdout 167 167 2026-03-09T18:26:34.455 INFO:teuthology.orchestra.run.vm04.stdout:Creating initial keys... 2026-03-09T18:26:34.567 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-authtool: stdout AQDaEK9pDVtaIBAAjD1RnaVHs/uvdO6BOm1izQ== 2026-03-09T18:26:34.721 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-authtool: stdout AQDaEK9p4nhEKBAA9avFsfUfCftXJXK1vecjqw== 2026-03-09T18:26:34.833 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-authtool: stdout AQDaEK9p+kxELxAAcC9yaqLZUGNq1mWmDE97qA== 2026-03-09T18:26:34.834 INFO:teuthology.orchestra.run.vm04.stdout:Creating initial monmap... 
2026-03-09T18:26:34.949 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: monmap file /tmp/monmap 2026-03-09T18:26:34.949 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/monmaptool: stdout setting min_mon_release = quincy 2026-03-09T18:26:34.949 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: set fsid to 5769e1c8-1be5-11f1-a591-591820987f3e 2026-03-09T18:26:34.949 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors) 2026-03-09T18:26:34.949 INFO:teuthology.orchestra.run.vm04.stdout:monmaptool for a [v2:192.168.123.104:3300,v1:192.168.123.104:6789] on /usr/bin/monmaptool: monmap file /tmp/monmap 2026-03-09T18:26:34.950 INFO:teuthology.orchestra.run.vm04.stdout:setting min_mon_release = quincy 2026-03-09T18:26:34.950 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/monmaptool: set fsid to 5769e1c8-1be5-11f1-a591-591820987f3e 2026-03-09T18:26:34.950 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors) 2026-03-09T18:26:34.950 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:26:34.950 INFO:teuthology.orchestra.run.vm04.stdout:Creating mon... 2026-03-09T18:26:35.101 INFO:teuthology.orchestra.run.vm04.stdout:create mon.a on 2026-03-09T18:26:35.256 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Removed "/etc/systemd/system/multi-user.target.wants/ceph.target". 2026-03-09T18:26:35.387 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /etc/systemd/system/ceph.target. 2026-03-09T18:26:35.538 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph-5769e1c8-1be5-11f1-a591-591820987f3e.target → /etc/systemd/system/ceph-5769e1c8-1be5-11f1-a591-591820987f3e.target. 
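[Annotation] The symlinks above show the unit-naming scheme cephadm uses: every daemon runs as an instance of the per-cluster template unit `ceph-<fsid>@.service`. A small illustrative sketch of deriving the instance names seen in this log:

```python
# Illustrative only: cephadm runs each daemon as an instance of the
# per-cluster template unit ceph-<fsid>@.service, so the instance unit is
# ceph-<fsid>@<type>.<id>.service, matching the symlinks and journalctl
# invocations above.
FSID = "5769e1c8-1be5-11f1-a591-591820987f3e"

def unit_name(daemon_type: str, daemon_id: str, fsid: str = FSID) -> str:
    return f"ceph-{fsid}@{daemon_type}.{daemon_id}.service"

print(unit_name("mon", "a"))
```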
2026-03-09T18:26:35.538 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph.target.wants/ceph-5769e1c8-1be5-11f1-a591-591820987f3e.target → /etc/systemd/system/ceph-5769e1c8-1be5-11f1-a591-591820987f3e.target. 2026-03-09T18:26:35.700 INFO:teuthology.orchestra.run.vm04.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-5769e1c8-1be5-11f1-a591-591820987f3e@mon.a 2026-03-09T18:26:35.700 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Failed to reset failed state of unit ceph-5769e1c8-1be5-11f1-a591-591820987f3e@mon.a.service: Unit ceph-5769e1c8-1be5-11f1-a591-591820987f3e@mon.a.service not loaded. 2026-03-09T18:26:35.837 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-5769e1c8-1be5-11f1-a591-591820987f3e.target.wants/ceph-5769e1c8-1be5-11f1-a591-591820987f3e@mon.a.service → /etc/systemd/system/ceph-5769e1c8-1be5-11f1-a591-591820987f3e@.service. 2026-03-09T18:26:36.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:36 vm04 podman[51129]: 2026-03-09 18:26:36.112955244 +0000 UTC m=+0.166888340 container start 16afbfea12ad3c53f082315ca961033443a87af050868a78ea0b34c91cf29d77 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mon-a, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-09T18:26:36.403 
INFO:teuthology.orchestra.run.vm04.stdout:firewalld does not appear to be present 2026-03-09T18:26:36.403 INFO:teuthology.orchestra.run.vm04.stdout:Not possible to enable service . firewalld.service is not available 2026-03-09T18:26:36.403 INFO:teuthology.orchestra.run.vm04.stdout:Waiting for mon to start... 2026-03-09T18:26:36.403 INFO:teuthology.orchestra.run.vm04.stdout:Waiting for mon... 2026-03-09T18:26:36.481 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:36 vm04 bash[51129]: 16afbfea12ad3c53f082315ca961033443a87af050868a78ea0b34c91cf29d77 2026-03-09T18:26:36.481 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:36 vm04 systemd[1]: Started Ceph mon.a for 5769e1c8-1be5-11f1-a591-591820987f3e. 2026-03-09T18:26:36.481 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:36 vm04 ceph-mon[51143]: mkfs 5769e1c8-1be5-11f1-a591-591820987f3e 2026-03-09T18:26:36.664 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout cluster: 2026-03-09T18:26:36.664 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout id: 5769e1c8-1be5-11f1-a591-591820987f3e 2026-03-09T18:26:36.664 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout health: HEALTH_OK 2026-03-09T18:26:36.664 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 2026-03-09T18:26:36.664 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout services: 2026-03-09T18:26:36.664 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon: 1 daemons, quorum a (age 0.14244s) 2026-03-09T18:26:36.664 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mgr: no daemons active 2026-03-09T18:26:36.664 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout osd: 0 osds: 0 up, 0 in 2026-03-09T18:26:36.664 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 2026-03-09T18:26:36.664 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout data: 2026-03-09T18:26:36.664 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 
pools: 0 pools, 0 pgs 2026-03-09T18:26:36.664 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout objects: 0 objects, 0 B 2026-03-09T18:26:36.664 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout usage: 0 B used, 0 B / 0 B avail 2026-03-09T18:26:36.665 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout pgs: 2026-03-09T18:26:36.665 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 2026-03-09T18:26:36.665 INFO:teuthology.orchestra.run.vm04.stdout:mon is available 2026-03-09T18:26:36.665 INFO:teuthology.orchestra.run.vm04.stdout:Assimilating anything we can from ceph.conf... 2026-03-09T18:26:36.781 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:36 vm04 ceph-mon[51143]: mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T18:26:36.864 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 2026-03-09T18:26:36.864 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout [global] 2026-03-09T18:26:36.864 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout fsid = 5769e1c8-1be5-11f1-a591-591820987f3e 2026-03-09T18:26:36.864 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug 2026-03-09T18:26:36.864 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.104:3300,v1:192.168.123.104:6789] 2026-03-09T18:26:36.864 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-09T18:26:36.864 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-09T18:26:36.864 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-09T18:26:36.864 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-09T18:26:36.864 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 2026-03-09T18:26:36.865 
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-09T18:26:36.865 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-09T18:26:36.865 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 2026-03-09T18:26:36.865 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout [osd] 2026-03-09T18:26:36.865 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 2026-03-09T18:26:36.865 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-09T18:26:36.865 INFO:teuthology.orchestra.run.vm04.stdout:Generating new minimal ceph.conf... 2026-03-09T18:26:37.067 INFO:teuthology.orchestra.run.vm04.stdout:Restarting the monitor... 2026-03-09T18:26:37.382 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 systemd[1]: Stopping Ceph mon.a for 5769e1c8-1be5-11f1-a591-591820987f3e... 2026-03-09T18:26:37.382 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mon-a[51139]: 2026-03-09T18:26:37.155+0000 7ff811bac640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T18:26:37.382 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mon-a[51139]: 2026-03-09T18:26:37.155+0000 7ff811bac640 -1 mon.a@0(leader) e1 *** Got Signal Terminated *** 2026-03-09T18:26:37.618 INFO:teuthology.orchestra.run.vm04.stdout:Setting public_network to 192.168.123.0/24 in mon config section 2026-03-09T18:26:37.632 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 podman[51344]: 2026-03-09 18:26:37.382487642 +0000 UTC m=+0.240589592 
container died 16afbfea12ad3c53f082315ca961033443a87af050868a78ea0b34c91cf29d77 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mon-a, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3) 2026-03-09T18:26:37.632 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 podman[51344]: 2026-03-09 18:26:37.39937399 +0000 UTC m=+0.257475940 container remove 16afbfea12ad3c53f082315ca961033443a87af050868a78ea0b34c91cf29d77 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mon-a, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-09T18:26:37.632 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 bash[51344]: ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mon-a 
2026-03-09T18:26:37.632 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 systemd[1]: ceph-5769e1c8-1be5-11f1-a591-591820987f3e@mon.a.service: Deactivated successfully. 2026-03-09T18:26:37.632 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 systemd[1]: Stopped Ceph mon.a for 5769e1c8-1be5-11f1-a591-591820987f3e. 2026-03-09T18:26:37.632 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 systemd[1]: Starting Ceph mon.a for 5769e1c8-1be5-11f1-a591-591820987f3e... 2026-03-09T18:26:37.632 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 podman[51413]: 2026-03-09 18:26:37.566817187 +0000 UTC m=+0.019627979 container create 5a16b990a68cc0d763d75470910f85997a680d7c9892e3e2c73e5137df05e897 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mon-a, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True) 2026-03-09T18:26:37.632 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 podman[51413]: 2026-03-09 18:26:37.604435089 +0000 UTC m=+0.057245891 container init 5a16b990a68cc0d763d75470910f85997a680d7c9892e3e2c73e5137df05e897 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mon-a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, CEPH_REF=squid) 2026-03-09T18:26:37.632 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 podman[51413]: 2026-03-09 18:26:37.608130436 +0000 UTC m=+0.060941228 container start 5a16b990a68cc0d763d75470910f85997a680d7c9892e3e2c73e5137df05e897 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mon-a, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-09T18:26:37.632 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 bash[51413]: 5a16b990a68cc0d763d75470910f85997a680d7c9892e3e2c73e5137df05e897 2026-03-09T18:26:37.632 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 podman[51413]: 2026-03-09 18:26:37.5595959 +0000 UTC m=+0.012406703 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:26:37.632 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 systemd[1]: Started Ceph mon.a for 5769e1c8-1be5-11f1-a591-591820987f3e. 2026-03-09T18:26:37.632 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: set uid:gid to 167:167 (ceph:ceph) 2026-03-09T18:26:37.632 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 2 2026-03-09T18:26:37.633 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: pidfile_write: ignore empty --pid-file 2026-03-09T18:26:37.633 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: load: jerasure load: lrc 2026-03-09T18:26:37.633 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: RocksDB version: 7.9.2 2026-03-09T18:26:37.633 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Git sha 0 2026-03-09T18:26:37.633 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Compile date 2026-02-25 18:11:04 2026-03-09T18:26:37.633 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: DB SUMMARY 2026-03-09T18:26:37.633 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: DB Session ID: SNGB55XXPANFSO6MTP16 2026-03-09T18:26:37.633 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: CURRENT file: CURRENT 2026-03-09T18:26:37.633 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: IDENTITY file: IDENTITY 2026-03-09T18:26:37.633 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: MANIFEST file: MANIFEST-000010 size: 179 Bytes 2026-03-09T18:26:37.633 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db 
dir, Total Num: 1, files: 000008.sst 2026-03-09T18:26:37.633 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 000009.log size: 75933 ; 2026-03-09T18:26:37.633 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.error_if_exists: 0 2026-03-09T18:26:37.633 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.create_if_missing: 0 2026-03-09T18:26:37.633 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.paranoid_checks: 1 2026-03-09T18:26:37.633 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.flush_verify_memtable_count: 1 2026-03-09T18:26:37.633 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-09T18:26:37.633 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-09T18:26:37.633 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.env: 0x563b5bcbfdc0 2026-03-09T18:26:37.633 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.fs: PosixFileSystem 2026-03-09T18:26:37.633 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.info_log: 0x563b5cbc2700 2026-03-09T18:26:37.633 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.max_file_opening_threads: 16 2026-03-09T18:26:37.633 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.statistics: (nil) 2026-03-09T18:26:37.633 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.use_fsync: 0 2026-03-09T18:26:37.633 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.max_log_file_size: 0 2026-03-09T18:26:37.633 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-09T18:26:37.633 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.log_file_time_to_roll: 0 2026-03-09T18:26:37.633 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.keep_log_file_num: 1000 2026-03-09T18:26:37.633 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.recycle_log_file_num: 0 2026-03-09T18:26:37.633 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.allow_fallocate: 1 2026-03-09T18:26:37.819 INFO:teuthology.orchestra.run.vm04.stdout:Wrote config to /etc/ceph/ceph.conf 2026-03-09T18:26:37.820 INFO:teuthology.orchestra.run.vm04.stdout:Wrote keyring to /etc/ceph/ceph.client.admin.keyring 2026-03-09T18:26:37.820 INFO:teuthology.orchestra.run.vm04.stdout:Creating mgr... 2026-03-09T18:26:37.820 INFO:teuthology.orchestra.run.vm04.stdout:Verifying port 0.0.0.0:9283 ... 2026-03-09T18:26:37.821 INFO:teuthology.orchestra.run.vm04.stdout:Verifying port 0.0.0.0:8765 ... 
2026-03-09T18:26:37.898 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.allow_mmap_reads: 0 2026-03-09T18:26:37.898 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.allow_mmap_writes: 0 2026-03-09T18:26:37.898 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.use_direct_reads: 0 2026-03-09T18:26:37.898 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.create_missing_column_families: 0 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.db_log_dir: 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.wal_dir: 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.table_cache_numshardbits: 6 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.WAL_ttl_seconds: 0 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.WAL_size_limit_MB: 0 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.is_fd_close_on_exec: 1 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: 
Options.advise_random_on_open: 1 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.db_write_buffer_size: 0 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.write_buffer_manager: 0x563b5cbc7900 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.use_adaptive_mutex: 0 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.rate_limiter: (nil) 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.wal_recovery_mode: 2 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.enable_thread_tracking: 0 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.enable_pipelined_write: 0 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.unordered_write: 0 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-09T18:26:37.899 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.row_cache: None 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.wal_filter: None 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.allow_ingest_behind: 0 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.two_write_queues: 0 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.manual_wal_flush: 0 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.wal_compression: 0 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.atomic_flush: 0 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.persist_stats_to_disk: 0 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.write_dbid_to_manifest: 0 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.log_readahead_size: 0 2026-03-09T18:26:37.899 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.best_efforts_recovery: 0 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.allow_data_in_errors: 0 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.db_host_id: __hostname__ 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.enforce_single_del_contracts: true 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.max_background_jobs: 2 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.max_background_compactions: -1 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.max_subcompactions: 1 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.delayed_write_rate : 16777216 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 
ceph-mon[51427]: rocksdb: Options.max_total_wal_size: 0 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.stats_dump_period_sec: 600 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.stats_persist_period_sec: 600 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.max_open_files: -1 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.bytes_per_sync: 0 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.wal_bytes_per_sync: 0 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.strict_bytes_per_sync: 0 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.compaction_readahead_size: 0 2026-03-09T18:26:37.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.max_background_flushes: -1 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Compression algorithms supported: 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: kZSTD supported: 0 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: kXpressCompression supported: 0 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 
09 18:26:37 vm04 ceph-mon[51427]: rocksdb: kBZip2Compression supported: 0 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: kLZ4Compression supported: 1 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: kZlibCompression supported: 1 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: kLZ4HCCompression supported: 1 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: kSnappyCompression supported: 1 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Fast CRC32 supported: Supported on x86 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: DMutex implementation: pthread_mutex_t 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.merge_operator: 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.compaction_filter: None 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 
vm04 ceph-mon[51427]: rocksdb: Options.compaction_filter_factory: None 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.sst_partitioner_factory: None 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.memtable_factory: SkipListFactory 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.table_factory: BlockBasedTable 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b5cbc2640) 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout: cache_index_and_filter_blocks: 1 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout: pin_top_level_index_and_filter: 1 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout: index_type: 0 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout: data_block_index_type: 0 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout: index_shortening: 1 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout: data_block_hash_table_util_ratio: 0.750000 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout: checksum: 4 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout: no_block_cache: 0 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout: block_cache: 0x563b5cbe7350 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout: block_cache_name: BinnedLRUCache 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout: block_cache_options: 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout: 
capacity : 536870912 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout: num_shard_bits : 4 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout: strict_capacity_limit : 0 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout: high_pri_pool_ratio: 0.000 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout: block_cache_compressed: (nil) 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout: persistent_cache: (nil) 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout: block_size: 4096 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout: block_size_deviation: 10 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout: block_restart_interval: 16 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout: index_block_restart_interval: 1 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout: metadata_block_size: 4096 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout: partition_filters: 0 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout: use_delta_encoding: 1 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout: filter_policy: bloomfilter 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout: whole_key_filtering: 1 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout: verify_compression: 0 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout: read_amp_bytes_per_bit: 0 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout: format_version: 5 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout: enable_index_compression: 1 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout: block_align: 0 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout: max_auto_readahead_size: 262144 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout: prepopulate_block_cache: 0 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout: initial_auto_readahead_size: 8192 
2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout: num_file_reads_for_auto_readahead: 2 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.write_buffer_size: 33554432 2026-03-09T18:26:37.900 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.max_write_buffer_number: 2 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.compression: NoCompression 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.bottommost_compression: Disabled 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.prefix_extractor: nullptr 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.num_levels: 7 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 
18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.compression_opts.window_bits: -14 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.compression_opts.level: 32767 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.compression_opts.strategy: 0 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: 
Options.compression_opts.use_zstd_dict_trainer: true 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.compression_opts.enabled: false 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.target_file_size_base: 67108864 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.target_file_size_multiplier: 1 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 
ceph-mon[51427]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.arena_block_size: 1048576 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.disable_auto_compactions: 0 2026-03-09T18:26:37.901 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 
18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.inplace_update_support: 0 2026-03-09T18:26:37.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.inplace_update_num_locks: 10000 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.memtable_huge_page_size: 0 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.bloom_locality: 0 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.max_successive_merges: 0 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.optimize_filters_for_hits: 0 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.paranoid_file_checks: 0 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.force_consistency_checks: 1 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.report_bg_io_stats: 0 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.ttl: 2592000 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.periodic_compaction_seconds: 0 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-09T18:26:37.902 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.enable_blob_files: false 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.min_blob_size: 0 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.blob_file_size: 268435456 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.blob_compression_type: NoCompression 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.enable_blob_garbage_collection: false 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.blob_file_starting_level: 0 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 
0,max_column_family is 0,min_log_number_to_keep is 5 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 8d96c437-31c0-4021-8379-7736b9ff24c3 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773080797641446, "job": 1, "event": "recovery_started", "wal_files": [9]} 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773080797643069, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 72911, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 227, "table_properties": {"data_size": 71190, "index_size": 174, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 517, "raw_key_size": 9804, "raw_average_key_size": 49, "raw_value_size": 65683, "raw_average_value_size": 331, "num_data_blocks": 8, "num_entries": 198, "num_filter_entries": 198, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773080797, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8d96c437-31c0-4021-8379-7736b9ff24c3", "db_session_id": "SNGB55XXPANFSO6MTP16", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}} 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773080797643136, "job": 1, "event": "recovery_finished"} 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: [db/version_set.cc:5047] Creating manifest 15 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-a/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x563b5cbe8e00 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: DB pointer 0x563b5ccfe000 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: starting mon.a rank 0 at public addrs [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] at bind addrs [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon_data /var/lib/ceph/mon/ceph-a fsid 5769e1c8-1be5-11f1-a591-591820987f3e 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: rocksdb: 
[db/db_impl/db_impl.cc:1111] 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout: ** DB Stats ** 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout: Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout: Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout: 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout: ** Compaction Stats [default] ** 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout: L0 2/0 73.06 KB 0.5 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 49.5 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout: Sum 2/0 73.06 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 
1.0 0.0 49.5 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 49.5 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout: 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout: ** Compaction Stats [default] ** 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout: --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 49.5 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout: 2026-03-09T18:26:37.902 INFO:journalctl@ceph.mon.a.vm04.stdout: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-09T18:26:37.903 INFO:journalctl@ceph.mon.a.vm04.stdout: 2026-03-09T18:26:37.903 INFO:journalctl@ceph.mon.a.vm04.stdout: Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T18:26:37.903 INFO:journalctl@ceph.mon.a.vm04.stdout: Flush(GB): cumulative 0.000, interval 0.000 2026-03-09T18:26:37.903 INFO:journalctl@ceph.mon.a.vm04.stdout: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-09T18:26:37.903 INFO:journalctl@ceph.mon.a.vm04.stdout: AddFile(Total Files): cumulative 0, interval 0 2026-03-09T18:26:37.903 INFO:journalctl@ceph.mon.a.vm04.stdout: AddFile(L0 Files): cumulative 0, interval 0 2026-03-09T18:26:37.903 INFO:journalctl@ceph.mon.a.vm04.stdout: AddFile(Keys): cumulative 0, interval 0 2026-03-09T18:26:37.903 INFO:journalctl@ceph.mon.a.vm04.stdout: 
Cumulative compaction: 0.00 GB write, 4.26 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T18:26:37.903 INFO:journalctl@ceph.mon.a.vm04.stdout: Interval compaction: 0.00 GB write, 4.26 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T18:26:37.903 INFO:journalctl@ceph.mon.a.vm04.stdout: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-09T18:26:37.903 INFO:journalctl@ceph.mon.a.vm04.stdout: Block cache BinnedLRUCache@0x563b5cbe7350#2 capacity: 512.00 MB usage: 26.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.8e-05 secs_since: 0 2026-03-09T18:26:37.903 INFO:journalctl@ceph.mon.a.vm04.stdout: Block cache entry stats(count,size,portion): DataBlock(3,25.11 KB,0.00478923%) FilterBlock(2,0.70 KB,0.00013411%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%) 2026-03-09T18:26:37.903 INFO:journalctl@ceph.mon.a.vm04.stdout: 2026-03-09T18:26:37.903 INFO:journalctl@ceph.mon.a.vm04.stdout: ** File Read Latency Histogram By Level [default] ** 2026-03-09T18:26:37.903 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: mon.a@-1(???) 
e1 preinit fsid 5769e1c8-1be5-11f1-a591-591820987f3e 2026-03-09T18:26:37.903 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: mon.a@-1(???).mds e1 new map 2026-03-09T18:26:37.903 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: mon.a@-1(???).mds e1 print_map 2026-03-09T18:26:37.903 INFO:journalctl@ceph.mon.a.vm04.stdout: e1 2026-03-09T18:26:37.903 INFO:journalctl@ceph.mon.a.vm04.stdout: btime 2026-03-09T18:26:36:478221+0000 2026-03-09T18:26:37.903 INFO:journalctl@ceph.mon.a.vm04.stdout: enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T18:26:37.903 INFO:journalctl@ceph.mon.a.vm04.stdout: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-09T18:26:37.903 INFO:journalctl@ceph.mon.a.vm04.stdout: legacy client fscid: -1 2026-03-09T18:26:37.903 INFO:journalctl@ceph.mon.a.vm04.stdout: 2026-03-09T18:26:37.903 INFO:journalctl@ceph.mon.a.vm04.stdout: No filesystems configured 2026-03-09T18:26:37.903 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: mon.a@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-09T18:26:37.903 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T18:26:37.903 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T18:26:37.903 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T18:26:37.903 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: mon.a@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3 2026-03-09T18:26:37.903 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: mon.a@-1(???).mgr e0 loading version 1 2026-03-09T18:26:37.903 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: mon.a@-1(???).mgr e1 active server: (0) 2026-03-09T18:26:37.903 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:37 vm04 ceph-mon[51427]: mon.a@-1(???).mgr e1 mkfs or daemon transitioned to available, loading commands 2026-03-09T18:26:37.990 INFO:teuthology.orchestra.run.vm04.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-5769e1c8-1be5-11f1-a591-591820987f3e@mgr.y 2026-03-09T18:26:37.990 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Failed to reset failed state of unit ceph-5769e1c8-1be5-11f1-a591-591820987f3e@mgr.y.service: Unit ceph-5769e1c8-1be5-11f1-a591-591820987f3e@mgr.y.service not loaded. 2026-03-09T18:26:38.126 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-5769e1c8-1be5-11f1-a591-591820987f3e.target.wants/ceph-5769e1c8-1be5-11f1-a591-591820987f3e@mgr.y.service → /etc/systemd/system/ceph-5769e1c8-1be5-11f1-a591-591820987f3e@.service. 2026-03-09T18:26:38.256 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:38 vm04 systemd[1]: Starting Ceph mgr.y for 5769e1c8-1be5-11f1-a591-591820987f3e... 2026-03-09T18:26:38.309 INFO:teuthology.orchestra.run.vm04.stdout:firewalld does not appear to be present 2026-03-09T18:26:38.309 INFO:teuthology.orchestra.run.vm04.stdout:Not possible to enable service . firewalld.service is not available 2026-03-09T18:26:38.309 INFO:teuthology.orchestra.run.vm04.stdout:firewalld does not appear to be present 2026-03-09T18:26:38.309 INFO:teuthology.orchestra.run.vm04.stdout:Not possible to open ports <[9283, 8765]>. 
firewalld.service is not available 2026-03-09T18:26:38.309 INFO:teuthology.orchestra.run.vm04.stdout:Waiting for mgr to start... 2026-03-09T18:26:38.309 INFO:teuthology.orchestra.run.vm04.stdout:Waiting for mgr... 2026-03-09T18:26:38.526 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:38 vm04 podman[51626]: 2026-03-09 18:26:38.256812809 +0000 UTC m=+0.018816702 container create 7573eb34f6f45514dd45a5a7b29fe9174e4b0928f92ec4426185da6d2309e559 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-09T18:26:38.526 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:38 vm04 podman[51626]: 2026-03-09 18:26:38.299229633 +0000 UTC m=+0.061233526 container init 7573eb34f6f45514dd45a5a7b29fe9174e4b0928f92ec4426185da6d2309e559 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, 
org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-09T18:26:38.526 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:38 vm04 podman[51626]: 2026-03-09 18:26:38.30221425 +0000 UTC m=+0.064218143 container start 7573eb34f6f45514dd45a5a7b29fe9174e4b0928f92ec4426185da6d2309e559 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, ceph=True) 2026-03-09T18:26:38.526 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:38 vm04 bash[51626]: 7573eb34f6f45514dd45a5a7b29fe9174e4b0928f92ec4426185da6d2309e559 2026-03-09T18:26:38.526 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:38 vm04 podman[51626]: 2026-03-09 18:26:38.24855403 +0000 UTC m=+0.010557923 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:26:38.526 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:38 vm04 systemd[1]: Started Ceph mgr.y for 5769e1c8-1be5-11f1-a591-591820987f3e. 
2026-03-09T18:26:38.526 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:38 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:38.425+0000 7f05ddb3e140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T18:26:38.526 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:38 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:38.471+0000 7f05ddb3e140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T18:26:38.699 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 2026-03-09T18:26:38.699 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout { 2026-03-09T18:26:38.699 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "fsid": "5769e1c8-1be5-11f1-a591-591820987f3e", 2026-03-09T18:26:38.699 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "health": { 2026-03-09T18:26:38.699 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-09T18:26:38.699 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-09T18:26:38.699 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-09T18:26:38.699 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-09T18:26:38.699 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-09T18:26:38.699 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-09T18:26:38.699 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 0 2026-03-09T18:26:38.699 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout ], 2026-03-09T18:26:38.699 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-09T18:26:38.699 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "a" 2026-03-09T18:26:38.699 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout ], 
2026-03-09T18:26:38.699 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "quorum_age": 0, 2026-03-09T18:26:38.699 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-09T18:26:38.699 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T18:26:38.700 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-09T18:26:38.700 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-09T18:26:38.700 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-09T18:26:38.700 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-09T18:26:38.700 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T18:26:38.700 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-09T18:26:38.700 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-09T18:26:38.700 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-09T18:26:38.700 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-09T18:26:38.700 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-09T18:26:38.700 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-09T18:26:38.700 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-09T18:26:38.700 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-09T18:26:38.700 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-09T18:26:38.700 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-09T18:26:38.700 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-09T18:26:38.700 
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-09T18:26:38.701 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-09T18:26:38.701 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-09T18:26:38.701 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-09T18:26:38.701 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-09T18:26:38.701 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-09T18:26:38.701 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-09T18:26:38.701 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T18:26:38.701 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "btime": "2026-03-09T18:26:36:478221+0000", 2026-03-09T18:26:38.701 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-09T18:26:38.701 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-09T18:26:38.701 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-09T18:26:38.701 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-09T18:26:38.701 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-09T18:26:38.701 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-09T18:26:38.701 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-09T18:26:38.701 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-09T18:26:38.701 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-09T18:26:38.701 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "restful" 2026-03-09T18:26:38.701 
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout ], 2026-03-09T18:26:38.701 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T18:26:38.701 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-09T18:26:38.701 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-09T18:26:38.701 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T18:26:38.701 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "modified": "2026-03-09T18:26:36.478913+0000", 2026-03-09T18:26:38.701 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T18:26:38.701 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-09T18:26:38.701 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-09T18:26:38.701 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout } 2026-03-09T18:26:38.701 INFO:teuthology.orchestra.run.vm04.stdout:mgr not available, waiting (1/15)... 
2026-03-09T18:26:38.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:38 vm04 ceph-mon[51427]: mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T18:26:38.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:38 vm04 ceph-mon[51427]: monmap epoch 1 2026-03-09T18:26:38.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:38 vm04 ceph-mon[51427]: fsid 5769e1c8-1be5-11f1-a591-591820987f3e 2026-03-09T18:26:38.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:38 vm04 ceph-mon[51427]: last_changed 2026-03-09T18:26:34.930477+0000 2026-03-09T18:26:38.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:38 vm04 ceph-mon[51427]: created 2026-03-09T18:26:34.930477+0000 2026-03-09T18:26:38.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:38 vm04 ceph-mon[51427]: min_mon_release 19 (squid) 2026-03-09T18:26:38.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:38 vm04 ceph-mon[51427]: election_strategy: 1 2026-03-09T18:26:38.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:38 vm04 ceph-mon[51427]: 0: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.a 2026-03-09T18:26:38.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:38 vm04 ceph-mon[51427]: fsmap 2026-03-09T18:26:38.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:38 vm04 ceph-mon[51427]: osdmap e1: 0 total, 0 up, 0 in 2026-03-09T18:26:38.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:38 vm04 ceph-mon[51427]: mgrmap e1: no daemons active 2026-03-09T18:26:38.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:38 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/4158963032' entity='client.admin' 2026-03-09T18:26:38.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:38 vm04 ceph-mon[51427]: from='client.? 
192.168.123.104:0/1758902856' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T18:26:39.216 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:38 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:38.899+0000 7f05ddb3e140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T18:26:39.717 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:39 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:39.246+0000 7f05ddb3e140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T18:26:39.717 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:39 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T18:26:39.717 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:39 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-09T18:26:39.717 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:39 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: from numpy import show_config as show_numpy_config 2026-03-09T18:26:39.717 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:39 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:39.340+0000 7f05ddb3e140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T18:26:39.717 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:39 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:39.379+0000 7f05ddb3e140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T18:26:39.717 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:39 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:39.453+0000 7f05ddb3e140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T18:26:40.277 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:39 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:39.987+0000 7f05ddb3e140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T18:26:40.277 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:40 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:40.105+0000 7f05ddb3e140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T18:26:40.277 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:40 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:40.150+0000 7f05ddb3e140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T18:26:40.277 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:40 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:40.192+0000 7f05ddb3e140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T18:26:40.277 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:40 vm04 
ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:40.236+0000 7f05ddb3e140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T18:26:40.717 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:40 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:40.276+0000 7f05ddb3e140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T18:26:40.717 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:40 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:40.457+0000 7f05ddb3e140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T18:26:40.717 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:40 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:40.509+0000 7f05ddb3e140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T18:26:40.925 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 2026-03-09T18:26:40.925 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout { 2026-03-09T18:26:40.925 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "fsid": "5769e1c8-1be5-11f1-a591-591820987f3e", 2026-03-09T18:26:40.925 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "health": { 2026-03-09T18:26:40.925 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-09T18:26:40.925 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-09T18:26:40.925 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-09T18:26:40.925 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-09T18:26:40.925 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-09T18:26:40.925 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-09T18:26:40.925 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: 
stdout 0
2026-03-09T18:26:40.925 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout ],
2026-03-09T18:26:40.925 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "quorum_names": [
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "a"
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout ],
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "quorum_age": 3,
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "monmap": {
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid",
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_mons": 1
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout },
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "osdmap": {
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_osds": 0,
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_up_osds": 0,
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "osd_up_since": 0,
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_in_osds": 0,
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "osd_in_since": 0,
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout },
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "pgmap": {
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "pgs_by_state": [],
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_pgs": 0,
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_pools": 0,
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_objects": 0,
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "data_bytes": 0,
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "bytes_used": 0,
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "bytes_avail": 0,
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "bytes_total": 0
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout },
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "fsmap": {
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "btime": "2026-03-09T18:26:36:478221+0000",
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "by_rank": [],
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "up:standby": 0
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout },
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "mgrmap": {
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "available": false,
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_standbys": 0,
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "modules": [
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "iostat",
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "nfs",
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "restful"
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout ],
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout },
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "servicemap": {
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "modified": "2026-03-09T18:26:36.478913+0000",
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout },
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "progress_events": {}
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }
2026-03-09T18:26:40.926 INFO:teuthology.orchestra.run.vm04.stdout:mgr not available, waiting (2/15)...
2026-03-09T18:26:41.080 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:40 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:40.755+0000 7f05ddb3e140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-09T18:26:41.080 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:41 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:41.079+0000 7f05ddb3e140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-09T18:26:41.080 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:40 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/2617039479' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-09T18:26:41.380 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:41 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:41.122+0000 7f05ddb3e140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-09T18:26:41.380 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:41 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:41.165+0000 7f05ddb3e140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-09T18:26:41.380 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:41 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:41.249+0000 7f05ddb3e140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-09T18:26:41.380 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:41 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:41.288+0000 7f05ddb3e140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-09T18:26:41.380 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:41 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:41.379+0000 7f05ddb3e140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-09T18:26:41.657 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:41 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:41.506+0000 7f05ddb3e140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-09T18:26:41.967 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:41 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:41.656+0000 7f05ddb3e140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-09T18:26:41.995 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:41 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:41.697+0000 7f05ddb3e140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-09T18:26:42.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:42 vm04 ceph-mon[51427]: Activating manager daemon y
2026-03-09T18:26:42.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:42 vm04 ceph-mon[51427]: mgrmap e2: y(active, starting, since 0.104015s)
2026-03-09T18:26:42.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:42 vm04 ceph-mon[51427]: from='mgr.14100 192.168.123.104:0/432290636' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-09T18:26:42.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:42 vm04 ceph-mon[51427]: from='mgr.14100 192.168.123.104:0/432290636' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-09T18:26:42.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:42 vm04 ceph-mon[51427]: from='mgr.14100 192.168.123.104:0/432290636' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-09T18:26:42.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:42 vm04 ceph-mon[51427]: from='mgr.14100 192.168.123.104:0/432290636' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-09T18:26:42.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:42 vm04 ceph-mon[51427]: from='mgr.14100 192.168.123.104:0/432290636' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-09T18:26:42.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:42 vm04 ceph-mon[51427]: from='mgr.14100 192.168.123.104:0/432290636' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-09T18:26:42.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:42 vm04 ceph-mon[51427]: from='mgr.14100 192.168.123.104:0/432290636' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T18:26:42.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:42 vm04 ceph-mon[51427]: from='mgr.14100 192.168.123.104:0/432290636' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-09T18:26:42.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:42 vm04 ceph-mon[51427]: Manager daemon y is now available
2026-03-09T18:26:42.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:42 vm04 ceph-mon[51427]: from='mgr.14100 192.168.123.104:0/432290636' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-09T18:26:42.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:42 vm04 ceph-mon[51427]: from='mgr.14100 192.168.123.104:0/432290636' entity='mgr.y'
2026-03-09T18:26:42.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:42 vm04 ceph-mon[51427]: from='mgr.14100 192.168.123.104:0/432290636' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-09T18:26:43.213 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:43 vm04 ceph-mon[51427]: from='mgr.14100 192.168.123.104:0/432290636' entity='mgr.y'
2026-03-09T18:26:43.213 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:43 vm04 ceph-mon[51427]: from='mgr.14100 192.168.123.104:0/432290636' entity='mgr.y'
2026-03-09T18:26:43.213 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:43 vm04 ceph-mon[51427]: mgrmap e3: y(active, since 1.10859s)
2026-03-09T18:26:43.245 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout
2026-03-09T18:26:43.245 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout {
2026-03-09T18:26:43.245 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "fsid": "5769e1c8-1be5-11f1-a591-591820987f3e",
2026-03-09T18:26:43.245 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "health": {
2026-03-09T18:26:43.245 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK",
2026-03-09T18:26:43.245 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "checks": {},
2026-03-09T18:26:43.245 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "mutes": []
2026-03-09T18:26:43.245 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout },
2026-03-09T18:26:43.245 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "election_epoch": 5,
2026-03-09T18:26:43.245 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "quorum": [
2026-03-09T18:26:43.245 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 0
2026-03-09T18:26:43.245 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout ],
2026-03-09T18:26:43.245 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "quorum_names": [
2026-03-09T18:26:43.245 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "a"
2026-03-09T18:26:43.245 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout ],
2026-03-09T18:26:43.245 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "quorum_age": 5,
2026-03-09T18:26:43.246 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "monmap": {
2026-03-09T18:26:43.246 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-09T18:26:43.246 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid",
2026-03-09T18:26:43.246 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_mons": 1
2026-03-09T18:26:43.246 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout },
2026-03-09T18:26:43.246 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "osdmap": {
2026-03-09T18:26:43.246 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-09T18:26:43.246 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_osds": 0,
2026-03-09T18:26:43.246 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_up_osds": 0,
2026-03-09T18:26:43.246 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "osd_up_since": 0,
2026-03-09T18:26:43.246 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_in_osds": 0,
2026-03-09T18:26:43.246 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "osd_in_since": 0,
2026-03-09T18:26:43.246 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0
2026-03-09T18:26:43.246 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout },
2026-03-09T18:26:43.246 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "pgmap": {
2026-03-09T18:26:43.246 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "pgs_by_state": [],
2026-03-09T18:26:43.246 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_pgs": 0,
2026-03-09T18:26:43.246 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_pools": 0,
2026-03-09T18:26:43.246 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_objects": 0,
2026-03-09T18:26:43.246 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "data_bytes": 0,
2026-03-09T18:26:43.246 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "bytes_used": 0,
2026-03-09T18:26:43.246 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "bytes_avail": 0,
2026-03-09T18:26:43.246 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "bytes_total": 0
2026-03-09T18:26:43.246 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout },
2026-03-09T18:26:43.246 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "fsmap": {
2026-03-09T18:26:43.246 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-09T18:26:43.246 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "btime": "2026-03-09T18:26:36:478221+0000",
2026-03-09T18:26:43.246 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "by_rank": [],
2026-03-09T18:26:43.246 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "up:standby": 0
2026-03-09T18:26:43.246 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout },
2026-03-09T18:26:43.246 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "mgrmap": {
2026-03-09T18:26:43.246 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "available": true,
2026-03-09T18:26:43.247 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_standbys": 0,
2026-03-09T18:26:43.247 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "modules": [
2026-03-09T18:26:43.247 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "iostat",
2026-03-09T18:26:43.247 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "nfs",
2026-03-09T18:26:43.247 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "restful"
2026-03-09T18:26:43.247 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout ],
2026-03-09T18:26:43.247 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-09T18:26:43.247 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout },
2026-03-09T18:26:43.247 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "servicemap": {
2026-03-09T18:26:43.247 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-09T18:26:43.247 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "modified": "2026-03-09T18:26:36.478913+0000",
2026-03-09T18:26:43.247 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-09T18:26:43.247 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout },
2026-03-09T18:26:43.247 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "progress_events": {}
2026-03-09T18:26:43.247 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }
2026-03-09T18:26:43.247 INFO:teuthology.orchestra.run.vm04.stdout:mgr is available
2026-03-09T18:26:43.882 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout
2026-03-09T18:26:43.882 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout [global]
2026-03-09T18:26:43.882 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout fsid = 5769e1c8-1be5-11f1-a591-591820987f3e
2026-03-09T18:26:43.882 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug
2026-03-09T18:26:43.882 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.104:3300,v1:192.168.123.104:6789]
2026-03-09T18:26:43.882 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true
2026-03-09T18:26:43.882 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true
2026-03-09T18:26:43.882 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false
2026-03-09T18:26:43.882 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0
2026-03-09T18:26:43.882 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout
2026-03-09T18:26:43.882 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout [mgr]
2026-03-09T18:26:43.882 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false
2026-03-09T18:26:43.882 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout
2026-03-09T18:26:43.882 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout [osd]
2026-03-09T18:26:43.882 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10
2026-03-09T18:26:43.882 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true
2026-03-09T18:26:43.882 INFO:teuthology.orchestra.run.vm04.stdout:Enabling cephadm module...
2026-03-09T18:26:44.024 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:44 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/2888362470' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-09T18:26:44.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:44 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/1108840609' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
2026-03-09T18:26:45.379 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:45 vm04 ceph-mon[51427]: mgrmap e4: y(active, since 2s)
2026-03-09T18:26:45.379 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:45 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3641037577' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
2026-03-09T18:26:45.379 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:45 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ignoring --setuser ceph since I am not root
2026-03-09T18:26:45.379 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:45 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ignoring --setgroup ceph since I am not root
2026-03-09T18:26:45.379 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:45 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:45.213+0000 7efe8a0cc140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-09T18:26:45.379 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:45 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:45.287+0000 7efe8a0cc140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-09T18:26:45.436 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout {
2026-03-09T18:26:45.436 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 5,
2026-03-09T18:26:45.436 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "available": true,
2026-03-09T18:26:45.436 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "active_name": "y",
2026-03-09T18:26:45.436 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_standby": 0
2026-03-09T18:26:45.436 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }
2026-03-09T18:26:45.436 INFO:teuthology.orchestra.run.vm04.stdout:Waiting for the mgr to restart...
2026-03-09T18:26:45.436 INFO:teuthology.orchestra.run.vm04.stdout:Waiting for mgr epoch 5...
2026-03-09T18:26:46.055 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:45 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:45.788+0000 7efe8a0cc140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-09T18:26:46.377 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:46 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:46.153+0000 7efe8a0cc140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-09T18:26:46.377 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:46 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-09T18:26:46.377 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:46 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-09T18:26:46.377 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:46 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: from numpy import show_config as show_numpy_config
2026-03-09T18:26:46.378 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:46 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:46.247+0000 7efe8a0cc140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-09T18:26:46.378 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:46 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:46.295+0000 7efe8a0cc140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-09T18:26:46.378 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:46 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3641037577' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
2026-03-09T18:26:46.378 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:46 vm04 ceph-mon[51427]: mgrmap e5: y(active, since 3s)
2026-03-09T18:26:46.378 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:46 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/1301508039' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
2026-03-09T18:26:46.716 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:46 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:46.376+0000 7efe8a0cc140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-09T18:26:47.194 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:46 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:46.930+0000 7efe8a0cc140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-09T18:26:47.195 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:47 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:47.058+0000 7efe8a0cc140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-09T18:26:47.195 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:47 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:47.103+0000 7efe8a0cc140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-09T18:26:47.195 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:47 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:47.145+0000 7efe8a0cc140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-09T18:26:47.466 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:47 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:47.194+0000 7efe8a0cc140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-09T18:26:47.467 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:47 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:47.238+0000 7efe8a0cc140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-09T18:26:47.467 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:47 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:47.438+0000 7efe8a0cc140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-09T18:26:47.756 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:47 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:47.498+0000 7efe8a0cc140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-09T18:26:48.069 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:47 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:47.755+0000 7efe8a0cc140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-09T18:26:48.374 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:48 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:48.068+0000 7efe8a0cc140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-09T18:26:48.374 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:48 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:48.108+0000 7efe8a0cc140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-09T18:26:48.374 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:48 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:48.154+0000 7efe8a0cc140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-09T18:26:48.374 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:48 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:48.240+0000 7efe8a0cc140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-09T18:26:48.374 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:48 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:48.282+0000 7efe8a0cc140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-09T18:26:48.649 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:48 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:48.373+0000 7efe8a0cc140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-09T18:26:48.649 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:48 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:48.497+0000 7efe8a0cc140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-09T18:26:48.967 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:48 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:48.648+0000 7efe8a0cc140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-09T18:26:48.967 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:48 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:48.689+0000 7efe8a0cc140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-09T18:26:48.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:48 vm04 ceph-mon[51427]: Active manager daemon y restarted
2026-03-09T18:26:48.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:48 vm04 ceph-mon[51427]: Activating manager daemon y
2026-03-09T18:26:48.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:48 vm04 ceph-mon[51427]: osdmap e2: 0 total, 0 up, 0 in
2026-03-09T18:26:48.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:48 vm04 ceph-mon[51427]: mgrmap e6: y(active, starting, since 0.0045732s)
2026-03-09T18:26:48.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:48 vm04 ceph-mon[51427]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T18:26:48.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:48 vm04 ceph-mon[51427]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-09T18:26:48.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:48 vm04 ceph-mon[51427]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-09T18:26:48.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:48 vm04 ceph-mon[51427]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-09T18:26:48.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:48 vm04 ceph-mon[51427]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-09T18:26:48.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:48 vm04 ceph-mon[51427]: Manager daemon y is now available
2026-03-09T18:26:48.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:48 vm04 ceph-mon[51427]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y'
2026-03-09T18:26:48.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:48 vm04 ceph-mon[51427]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y'
2026-03-09T18:26:48.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:48 vm04 ceph-mon[51427]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:26:48.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:48 vm04 ceph-mon[51427]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-09T18:26:48.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:48 vm04 ceph-mon[51427]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:26:48.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:48 vm04 ceph-mon[51427]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-09T18:26:49.745 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout {
2026-03-09T18:26:49.745 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 7,
2026-03-09T18:26:49.745 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "initialized": true
2026-03-09T18:26:49.745 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }
2026-03-09T18:26:49.745 INFO:teuthology.orchestra.run.vm04.stdout:mgr epoch 5 is available
2026-03-09T18:26:49.745 INFO:teuthology.orchestra.run.vm04.stdout:Setting orchestrator backend to cephadm...
2026-03-09T18:26:50.336 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:49 vm04 ceph-mon[51427]: Found migration_current of "None". Setting to last migration.
2026-03-09T18:26:50.337 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:49 vm04 ceph-mon[51427]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y'
2026-03-09T18:26:50.337 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:49 vm04 ceph-mon[51427]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y'
2026-03-09T18:26:50.337 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:49 vm04 ceph-mon[51427]: mgrmap e7: y(active, since 1.00731s)
2026-03-09T18:26:50.387 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout value unchanged
2026-03-09T18:26:50.387 INFO:teuthology.orchestra.run.vm04.stdout:Generating ssh key...
2026-03-09T18:26:50.937 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:50 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: Generating public/private ed25519 key pair.
2026-03-09T18:26:50.938 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:50 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: Your identification has been saved in /tmp/tmpz6s118dx/key
2026-03-09T18:26:50.938 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:50 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: Your public key has been saved in /tmp/tmpz6s118dx/key.pub
2026-03-09T18:26:50.938 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:50 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: The key fingerprint is:
2026-03-09T18:26:50.938 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:50 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: SHA256:nU0TsURxACKxFDSPMOZA85n/hp8ysWbZYRORBg3YZFs ceph-5769e1c8-1be5-11f1-a591-591820987f3e
2026-03-09T18:26:50.938 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:50 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: The key's randomart image is:
2026-03-09T18:26:50.938 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:50 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: +--[ED25519 256]--+
2026-03-09T18:26:50.938 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:50 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: | .+ +=%*E..oB+. |
2026-03-09T18:26:50.938 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:50 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: | *.B.O=. . + |
2026-03-09T18:26:50.938 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:50 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: | = +... + |
2026-03-09T18:26:50.938 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:50 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: | . .. + . |
2026-03-09T18:26:50.938 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:50 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: | . S.o . |
2026-03-09T18:26:50.938 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:50 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: | .o+ |
2026-03-09T18:26:50.938 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:50 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: | .*oo |
2026-03-09T18:26:50.938 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:50 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: | Bo.. |
2026-03-09T18:26:50.938 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:50 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: | o oo |
2026-03-09T18:26:50.938 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:50 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: +----[SHA256]-----+
2026-03-09T18:26:50.988 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHBj7bfqB0UazOY2o6+8INmX0JDIlMs+gu5zXq/2HLvr ceph-5769e1c8-1be5-11f1-a591-591820987f3e
2026-03-09T18:26:50.989 INFO:teuthology.orchestra.run.vm04.stdout:Wrote public SSH key to /home/ubuntu/cephtest/ceph.pub
2026-03-09T18:26:50.989 INFO:teuthology.orchestra.run.vm04.stdout:Adding key to root@localhost authorized_keys...
2026-03-09T18:26:50.989 INFO:teuthology.orchestra.run.vm04.stdout:Adding host vm04...
2026-03-09T18:26:51.218 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:51 vm04 ceph-mon[51427]: from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-09T18:26:51.218 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:51 vm04 ceph-mon[51427]: from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
2026-03-09T18:26:51.218 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:51 vm04 ceph-mon[51427]: [09/Mar/2026:18:26:49] ENGINE Bus STARTING
2026-03-09T18:26:51.218 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:51 vm04 ceph-mon[51427]: from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:26:51.218 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:51 vm04 ceph-mon[51427]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y'
2026-03-09T18:26:51.218 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:51 vm04 ceph-mon[51427]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:26:51.218 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:51 vm04 ceph-mon[51427]: [09/Mar/2026:18:26:50] ENGINE Serving on http://192.168.123.104:8765
2026-03-09T18:26:51.218 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:51 vm04 ceph-mon[51427]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:26:51.218 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:51 vm04 ceph-mon[51427]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y'
2026-03-09T18:26:51.218 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:51 vm04 ceph-mon[51427]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y'
2026-03-09T18:26:52.066 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:52 vm04 ceph-mon[51427]: [09/Mar/2026:18:26:50]
ENGINE Serving on https://192.168.123.104:7150 2026-03-09T18:26:52.066 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:52 vm04 ceph-mon[51427]: [09/Mar/2026:18:26:50] ENGINE Bus STARTED 2026-03-09T18:26:52.066 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:52 vm04 ceph-mon[51427]: [09/Mar/2026:18:26:50] ENGINE Client ('192.168.123.104', 52936) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T18:26:52.066 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:52 vm04 ceph-mon[51427]: from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:26:52.066 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:52 vm04 ceph-mon[51427]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:26:52.066 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:52 vm04 ceph-mon[51427]: Generating ssh key... 2026-03-09T18:26:52.066 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:52 vm04 ceph-mon[51427]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:26:52.066 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:52 vm04 ceph-mon[51427]: mgrmap e8: y(active, since 2s) 2026-03-09T18:26:52.914 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout Added host 'vm04' with addr '192.168.123.104' 2026-03-09T18:26:52.914 INFO:teuthology.orchestra.run.vm04.stdout:Deploying unmanaged mon service... 
2026-03-09T18:26:53.188 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:53 vm04 ceph-mon[51427]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm04", "addr": "192.168.123.104", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:26:53.188 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:53 vm04 ceph-mon[51427]: Deploying cephadm binary to vm04 2026-03-09T18:26:53.188 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:53 vm04 ceph-mon[51427]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y' 2026-03-09T18:26:53.188 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:53 vm04 ceph-mon[51427]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:26:53.217 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout Scheduled mon update... 2026-03-09T18:26:53.218 INFO:teuthology.orchestra.run.vm04.stdout:Deploying unmanaged mgr service... 2026-03-09T18:26:53.519 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout Scheduled mgr update... 2026-03-09T18:26:54.087 INFO:teuthology.orchestra.run.vm04.stdout:Enabling the dashboard module... 2026-03-09T18:26:54.320 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:54 vm04 ceph-mon[51427]: Added host vm04 2026-03-09T18:26:54.320 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:54 vm04 ceph-mon[51427]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y' 2026-03-09T18:26:54.320 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:54 vm04 ceph-mon[51427]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y' 2026-03-09T18:26:54.320 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:54 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/372788233' entity='client.admin' 2026-03-09T18:26:54.320 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:54 vm04 ceph-mon[51427]: from='client.? 
192.168.123.104:0/3780005662' entity='client.admin' 2026-03-09T18:26:55.246 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:55 vm04 ceph-mon[51427]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:26:55.246 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:55 vm04 ceph-mon[51427]: Saving service mon spec with placement count:5 2026-03-09T18:26:55.246 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:55 vm04 ceph-mon[51427]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:26:55.246 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:55 vm04 ceph-mon[51427]: Saving service mgr spec with placement count:2 2026-03-09T18:26:55.246 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:55 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/1979316085' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-09T18:26:55.246 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:55 vm04 ceph-mon[51427]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y' 2026-03-09T18:26:55.246 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:55 vm04 ceph-mon[51427]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y' 2026-03-09T18:26:55.246 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:55 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ignoring --setuser ceph since I am not root 2026-03-09T18:26:55.246 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:55 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ignoring --setgroup ceph since I am not root 2026-03-09T18:26:55.477 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout { 2026-03-09T18:26:55.478 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 9, 
2026-03-09T18:26:55.478 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-09T18:26:55.478 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "active_name": "y", 2026-03-09T18:26:55.478 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-09T18:26:55.478 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout } 2026-03-09T18:26:55.478 INFO:teuthology.orchestra.run.vm04.stdout:Waiting for the mgr to restart... 2026-03-09T18:26:55.478 INFO:teuthology.orchestra.run.vm04.stdout:Waiting for mgr epoch 9... 2026-03-09T18:26:55.525 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:55 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:55.298+0000 7f1ecd825140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T18:26:55.525 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:55 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:55.343+0000 7f1ecd825140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T18:26:56.085 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:55 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:55.813+0000 7f1ecd825140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T18:26:56.346 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:56 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/1979316085' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-09T18:26:56.346 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:56 vm04 ceph-mon[51427]: mgrmap e9: y(active, since 6s) 2026-03-09T18:26:56.346 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:56 vm04 ceph-mon[51427]: from='client.? 
192.168.123.104:0/3117394488' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T18:26:56.346 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:56 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:56.135+0000 7f1ecd825140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T18:26:56.347 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:56 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T18:26:56.347 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:56 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-09T18:26:56.347 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:56 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: from numpy import show_config as show_numpy_config 2026-03-09T18:26:56.347 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:56 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:56.228+0000 7f1ecd825140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T18:26:56.347 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:56 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:56.270+0000 7f1ecd825140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T18:26:56.717 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:56 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:56.345+0000 7f1ecd825140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T18:26:57.150 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:56 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:56.878+0000 7f1ecd825140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T18:26:57.151 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:56 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:56.990+0000 7f1ecd825140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T18:26:57.151 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:57 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:57.031+0000 7f1ecd825140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T18:26:57.151 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:57 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:57.067+0000 7f1ecd825140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T18:26:57.151 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:57 vm04 
ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:57.111+0000 7f1ecd825140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T18:26:57.151 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:57 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:57.149+0000 7f1ecd825140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T18:26:57.467 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:57 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:57.323+0000 7f1ecd825140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T18:26:57.467 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:57 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:57.376+0000 7f1ecd825140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T18:26:57.901 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:57 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:57.610+0000 7f1ecd825140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T18:26:58.189 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:57 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:57.900+0000 7f1ecd825140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T18:26:58.189 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:57 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:57.939+0000 7f1ecd825140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T18:26:58.189 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:57 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:57.983+0000 7f1ecd825140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T18:26:58.189 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:58 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 
2026-03-09T18:26:58.063+0000 7f1ecd825140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T18:26:58.189 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:58 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:58.101+0000 7f1ecd825140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T18:26:58.189 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:58 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:58.188+0000 7f1ecd825140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T18:26:58.449 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:58 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:58.306+0000 7f1ecd825140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T18:26:58.703 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:58 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:58.448+0000 7f1ecd825140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T18:26:58.703 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:26:58 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:26:58.487+0000 7f1ecd825140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T18:26:58.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:58 vm04 ceph-mon[51427]: Active manager daemon y restarted 2026-03-09T18:26:58.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:58 vm04 ceph-mon[51427]: Activating manager daemon y 2026-03-09T18:26:59.755 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout { 2026-03-09T18:26:59.755 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 11, 2026-03-09T18:26:59.755 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-09T18:26:59.755 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout } 2026-03-09T18:26:59.755 
INFO:teuthology.orchestra.run.vm04.stdout:mgr epoch 9 is available 2026-03-09T18:26:59.755 INFO:teuthology.orchestra.run.vm04.stdout:Generating a dashboard self-signed certificate... 2026-03-09T18:26:59.948 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:59 vm04 ceph-mon[51427]: osdmap e3: 0 total, 0 up, 0 in 2026-03-09T18:26:59.948 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:59 vm04 ceph-mon[51427]: mgrmap e10: y(active, starting, since 0.166442s) 2026-03-09T18:26:59.948 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:59 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:26:59.948 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:59 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T18:26:59.949 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:59 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T18:26:59.949 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:59 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T18:26:59.949 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:59 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T18:26:59.949 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:59 vm04 ceph-mon[51427]: Manager daemon y is now available 2026-03-09T18:26:59.949 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:59 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:26:59.949 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:59 vm04 ceph-mon[51427]: from='mgr.14150 
192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:26:59.949 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:59 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:26:59.949 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:59 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:26:59.949 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:26:59 vm04 ceph-mon[51427]: mgrmap e11: y(active, since 1.13034s) 2026-03-09T18:27:00.684 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout Self-signed certificate created 2026-03-09T18:27:00.684 INFO:teuthology.orchestra.run.vm04.stdout:Creating initial admin user... 2026-03-09T18:27:01.123 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout {"username": "admin", "password": "$2b$12$AS6OHxaeuOtEEDyiZnL31uo37exPpgRtVBDURRuQ/4K0KVrorcb8W", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1773080821, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": true} 2026-03-09T18:27:01.123 INFO:teuthology.orchestra.run.vm04.stdout:Fetching dashboard port number... 
2026-03-09T18:27:01.367 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:01 vm04 ceph-mon[51427]: [09/Mar/2026:18:26:59] ENGINE Bus STARTING 2026-03-09T18:27:01.367 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:01 vm04 ceph-mon[51427]: [09/Mar/2026:18:27:00] ENGINE Serving on https://192.168.123.104:7150 2026-03-09T18:27:01.367 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:01 vm04 ceph-mon[51427]: [09/Mar/2026:18:27:00] ENGINE Client ('192.168.123.104', 59040) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T18:27:01.367 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:01 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:01.367 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:01 vm04 ceph-mon[51427]: [09/Mar/2026:18:27:00] ENGINE Serving on http://192.168.123.104:8765 2026-03-09T18:27:01.367 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:01 vm04 ceph-mon[51427]: [09/Mar/2026:18:27:00] ENGINE Bus STARTED 2026-03-09T18:27:01.367 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:01 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:01.367 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:01 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:01.367 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:01 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:01.422 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 8443 2026-03-09T18:27:01.422 INFO:teuthology.orchestra.run.vm04.stdout:firewalld does not appear to be present 2026-03-09T18:27:01.422 INFO:teuthology.orchestra.run.vm04.stdout:Not possible to open ports <[8443]>. 
firewalld.service is not available 2026-03-09T18:27:01.424 INFO:teuthology.orchestra.run.vm04.stdout:Ceph Dashboard is now available at: 2026-03-09T18:27:01.424 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:27:01.424 INFO:teuthology.orchestra.run.vm04.stdout: URL: https://vm04.local:8443/ 2026-03-09T18:27:01.424 INFO:teuthology.orchestra.run.vm04.stdout: User: admin 2026-03-09T18:27:01.424 INFO:teuthology.orchestra.run.vm04.stdout: Password: 47lpvbegth 2026-03-09T18:27:01.424 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:27:01.424 INFO:teuthology.orchestra.run.vm04.stdout:Saving cluster configuration to /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/config directory 2026-03-09T18:27:01.724 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr set mgr/dashboard/cluster/status 2026-03-09T18:27:01.724 INFO:teuthology.orchestra.run.vm04.stdout:You can access the Ceph CLI as following in case of multi-cluster or non-default config: 2026-03-09T18:27:01.724 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:27:01.724 INFO:teuthology.orchestra.run.vm04.stdout: sudo /sbin/cephadm shell --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring 2026-03-09T18:27:01.724 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:27:01.724 INFO:teuthology.orchestra.run.vm04.stdout:Or, if you are only running a single cluster on this host: 2026-03-09T18:27:01.724 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:27:01.724 INFO:teuthology.orchestra.run.vm04.stdout: sudo /sbin/cephadm shell 2026-03-09T18:27:01.724 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:27:01.724 INFO:teuthology.orchestra.run.vm04.stdout:Please consider enabling telemetry to help improve Ceph: 2026-03-09T18:27:01.724 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:27:01.724 INFO:teuthology.orchestra.run.vm04.stdout: ceph telemetry on 2026-03-09T18:27:01.724 
INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:27:01.725 INFO:teuthology.orchestra.run.vm04.stdout:For more information see: 2026-03-09T18:27:01.725 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:27:01.725 INFO:teuthology.orchestra.run.vm04.stdout: https://docs.ceph.com/en/latest/mgr/telemetry/ 2026-03-09T18:27:01.725 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:27:01.725 INFO:teuthology.orchestra.run.vm04.stdout:Bootstrap complete. 2026-03-09T18:27:01.761 INFO:tasks.cephadm:Fetching config... 2026-03-09T18:27:01.761 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T18:27:01.761 DEBUG:teuthology.orchestra.run.vm04:> dd if=/etc/ceph/ceph.conf of=/dev/stdout 2026-03-09T18:27:01.783 INFO:tasks.cephadm:Fetching client.admin keyring... 2026-03-09T18:27:01.783 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T18:27:01.783 DEBUG:teuthology.orchestra.run.vm04:> dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout 2026-03-09T18:27:01.865 INFO:tasks.cephadm:Fetching mon keyring... 2026-03-09T18:27:01.865 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T18:27:01.865 DEBUG:teuthology.orchestra.run.vm04:> sudo dd if=/var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/keyring of=/dev/stdout 2026-03-09T18:27:01.935 INFO:tasks.cephadm:Fetching pub ssh key... 2026-03-09T18:27:01.935 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T18:27:01.935 DEBUG:teuthology.orchestra.run.vm04:> dd if=/home/ubuntu/cephtest/ceph.pub of=/dev/stdout 2026-03-09T18:27:01.997 INFO:tasks.cephadm:Installing pub ssh key for root users... 
2026-03-09T18:27:01.997 DEBUG:teuthology.orchestra.run.vm04:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHBj7bfqB0UazOY2o6+8INmX0JDIlMs+gu5zXq/2HLvr ceph-5769e1c8-1be5-11f1-a591-591820987f3e' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-09T18:27:02.086 INFO:teuthology.orchestra.run.vm04.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHBj7bfqB0UazOY2o6+8INmX0JDIlMs+gu5zXq/2HLvr ceph-5769e1c8-1be5-11f1-a591-591820987f3e 2026-03-09T18:27:02.114 DEBUG:teuthology.orchestra.run.vm09:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHBj7bfqB0UazOY2o6+8INmX0JDIlMs+gu5zXq/2HLvr ceph-5769e1c8-1be5-11f1-a591-591820987f3e' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-09T18:27:02.150 INFO:teuthology.orchestra.run.vm09.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHBj7bfqB0UazOY2o6+8INmX0JDIlMs+gu5zXq/2HLvr ceph-5769e1c8-1be5-11f1-a591-591820987f3e 2026-03-09T18:27:02.160 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph config set mgr mgr/cephadm/allow_ptrace true 2026-03-09T18:27:02.354 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config 2026-03-09T18:27:02.395 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:02 vm04 ceph-mon[51427]: from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:27:02.395 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:02 vm04 ceph-mon[51427]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": 
true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:27:02.395 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:02 vm04 ceph-mon[51427]: mgrmap e12: y(active, since 2s) 2026-03-09T18:27:02.395 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:02 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/318225902' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-09T18:27:02.395 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:02 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/2261911054' entity='client.admin' 2026-03-09T18:27:02.703 INFO:tasks.cephadm:Distributing conf and client.admin keyring to all hosts + 0755 2026-03-09T18:27:02.704 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph orch client-keyring set client.admin '*' --mode 0755 2026-03-09T18:27:02.936 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config 2026-03-09T18:27:03.247 INFO:tasks.cephadm:Writing (initial) conf and keyring to vm09 2026-03-09T18:27:03.247 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T18:27:03.247 DEBUG:teuthology.orchestra.run.vm09:> dd of=/etc/ceph/ceph.conf 2026-03-09T18:27:03.263 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T18:27:03.264 DEBUG:teuthology.orchestra.run.vm09:> dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:27:03.323 INFO:tasks.cephadm:Adding host vm09 to orchestrator... 
2026-03-09T18:27:03.323 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph orch host add vm09
2026-03-09T18:27:03.541 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config
2026-03-09T18:27:03.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:03 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/2867396109' entity='client.admin'
2026-03-09T18:27:03.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:03 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:03.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:03 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:03.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:03 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch
2026-03-09T18:27:03.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:03 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:03.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:03 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:27:03.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:03 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:03.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:03 vm04 ceph-mon[51427]: from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:27:03.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:03 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:03.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:03 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:27:03.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:03 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:27:03.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:03 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:27:03.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:03 vm04 ceph-mon[51427]: Updating vm04:/etc/ceph/ceph.conf
2026-03-09T18:27:05.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:05 vm04 ceph-mon[51427]: Updating vm04:/var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/config/ceph.conf
2026-03-09T18:27:05.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:05 vm04 ceph-mon[51427]: Updating vm04:/etc/ceph/ceph.client.admin.keyring
2026-03-09T18:27:05.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:05 vm04 ceph-mon[51427]: Updating vm04:/var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/config/ceph.client.admin.keyring
2026-03-09T18:27:05.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:05 vm04 ceph-mon[51427]: from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm09", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:27:05.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:05 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:05.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:05 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:05.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:05 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:05.694 INFO:teuthology.orchestra.run.vm04.stdout:Added host 'vm09' with addr '192.168.123.109'
2026-03-09T18:27:05.761 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph orch host ls --format=json
2026-03-09T18:27:05.944 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config
2026-03-09T18:27:06.189 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T18:27:06.189 INFO:teuthology.orchestra.run.vm04.stdout:[{"addr": "192.168.123.104", "hostname": "vm04", "labels": [], "status": ""}, {"addr": "192.168.123.109", "hostname": "vm09", "labels": [], "status": ""}]
2026-03-09T18:27:06.228 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:06 vm04 ceph-mon[51427]: Deploying cephadm binary to vm09
2026-03-09T18:27:06.228 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:06 vm04 ceph-mon[51427]: mgrmap e13: y(active, since 6s)
2026-03-09T18:27:06.228 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:06 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:06.229 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:06 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:27:06.229 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:06 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:06.270 INFO:tasks.cephadm:Setting crush tunables to default
2026-03-09T18:27:06.270 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph osd crush tunables default
2026-03-09T18:27:06.451 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config
2026-03-09T18:27:07.100 INFO:teuthology.orchestra.run.vm04.stderr:adjusted tunables profile to default
2026-03-09T18:27:07.142 INFO:tasks.cephadm:Adding mon.a on vm04
2026-03-09T18:27:07.142 INFO:tasks.cephadm:Adding mon.c on vm04
2026-03-09T18:27:07.142 INFO:tasks.cephadm:Adding mon.b on vm09
2026-03-09T18:27:07.142 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph orch apply mon '3;vm04:192.168.123.104=a;vm04:[v2:192.168.123.104:3301,v1:192.168.123.104:6790]=c;vm09:192.168.123.109=b'
2026-03-09T18:27:07.385 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-09T18:27:07.431 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-09T18:27:07.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:07 vm04 ceph-mon[51427]: Added host vm09
2026-03-09T18:27:07.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:07 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:07.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:07 vm04 ceph-mon[51427]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-09T18:27:07.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:07 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/2148983890' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch
2026-03-09T18:27:07.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:07 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:07.728 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled mon update...
2026-03-09T18:27:07.820 DEBUG:teuthology.orchestra.run.vm04:mon.c> sudo journalctl -f -n 0 -u ceph-5769e1c8-1be5-11f1-a591-591820987f3e@mon.c.service
2026-03-09T18:27:07.822 DEBUG:teuthology.orchestra.run.vm09:mon.b> sudo journalctl -f -n 0 -u ceph-5769e1c8-1be5-11f1-a591-591820987f3e@mon.b.service
2026-03-09T18:27:07.824 INFO:tasks.cephadm:Waiting for 3 mons in monmap...
2026-03-09T18:27:07.824 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph mon dump -f json
2026-03-09T18:27:08.067 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-09T18:27:08.101 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:08 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/2148983890' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished
2026-03-09T18:27:08.101 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:08 vm04 ceph-mon[51427]: osdmap e4: 0 total, 0 up, 0 in
2026-03-09T18:27:08.101 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:08 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:08.126 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-09T18:27:08.447 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:27:08.447 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"5769e1c8-1be5-11f1-a591-591820987f3e","modified":"2026-03-09T18:26:34.930477Z","created":"2026-03-09T18:26:34.930477Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:3300","nonce":0},{"type":"v1","addr":"192.168.123.104:6789","nonce":0}]},"addr":"192.168.123.104:6789/0","public_addr":"192.168.123.104:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
2026-03-09T18:27:08.447 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1
2026-03-09T18:27:09.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:09 vm04 ceph-mon[51427]: from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm04:192.168.123.104=a;vm04:[v2:192.168.123.104:3301,v1:192.168.123.104:6790]=c;vm09:192.168.123.109=b", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:27:09.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:09 vm04 ceph-mon[51427]: Saving service mon spec with placement vm04:192.168.123.104=a;vm04:[v2:192.168.123.104:3301,v1:192.168.123.104:6790]=c;vm09:192.168.123.109=b;count:3
2026-03-09T18:27:09.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:09 vm04 ceph-mon[51427]: from='client.? 192.168.123.109:0/2976322287' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-09T18:27:09.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:09 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:09.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:09 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:09.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:09 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:09.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:09 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:09.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:09 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch
2026-03-09T18:27:09.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:09 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:27:09.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:09 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:27:09.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:09 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:09.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:09 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:09.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:09 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:09.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:09 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T18:27:09.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:09 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:27:09.514 INFO:tasks.cephadm:Waiting for 3 mons in monmap...
2026-03-09T18:27:09.514 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph mon dump -f json
2026-03-09T18:27:09.854 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.b/config
2026-03-09T18:27:10.327 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:27:10.327 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"5769e1c8-1be5-11f1-a591-591820987f3e","modified":"2026-03-09T18:26:34.930477Z","created":"2026-03-09T18:26:34.930477Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:3300","nonce":0},{"type":"v1","addr":"192.168.123.104:6789","nonce":0}]},"addr":"192.168.123.104:6789/0","public_addr":"192.168.123.104:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
2026-03-09T18:27:10.327 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1
2026-03-09T18:27:10.466 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:10 vm04 ceph-mon[51427]: Updating vm09:/etc/ceph/ceph.conf
2026-03-09T18:27:10.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:10 vm04 ceph-mon[51427]: Updating vm09:/var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/config/ceph.conf
2026-03-09T18:27:10.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:10 vm04 ceph-mon[51427]: Updating vm09:/etc/ceph/ceph.client.admin.keyring
2026-03-09T18:27:10.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:10 vm04 ceph-mon[51427]: Updating vm09:/var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/config/ceph.client.admin.keyring
2026-03-09T18:27:10.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:10 vm04 ceph-mon[51427]: Deploying daemon mon.b on vm09
2026-03-09T18:27:11.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:10 vm09 ceph-mon[54744]: mon.b@-1(synchronizing).paxosservice(auth 1..3) refresh upgraded, format 0 -> 3
2026-03-09T18:27:11.464 INFO:tasks.cephadm:Waiting for 3 mons in monmap...
2026-03-09T18:27:11.464 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph mon dump -f json
2026-03-09T18:27:11.656 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.b/config
2026-03-09T18:27:16.062 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:15 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:27:16.062 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:15 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T18:27:16.062 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:15 vm04 ceph-mon[51427]: mon.a calling monitor election
2026-03-09T18:27:16.062 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:15 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T18:27:16.062 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:15 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T18:27:16.062 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:15 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T18:27:16.062 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:15 vm04 ceph-mon[51427]: mon.b calling monitor election
2026-03-09T18:27:16.062 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:15 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T18:27:16.062 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:15 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T18:27:16.062 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:15 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T18:27:16.062 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:15 vm04 ceph-mon[51427]: mon.a is new leader, mons a,b in quorum (ranks 0,1)
2026-03-09T18:27:16.062 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:15 vm04 ceph-mon[51427]: monmap epoch 2
2026-03-09T18:27:16.062 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:15 vm04 ceph-mon[51427]: fsid 5769e1c8-1be5-11f1-a591-591820987f3e
2026-03-09T18:27:16.062 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:15 vm04 ceph-mon[51427]: last_changed 2026-03-09T18:27:10.726761+0000
2026-03-09T18:27:16.062 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:15 vm04 ceph-mon[51427]: created 2026-03-09T18:26:34.930477+0000
2026-03-09T18:27:16.062 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:15 vm04 ceph-mon[51427]: min_mon_release 19 (squid)
2026-03-09T18:27:16.062 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:15 vm04 ceph-mon[51427]: election_strategy: 1
2026-03-09T18:27:16.062 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:15 vm04 ceph-mon[51427]: 0: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.a
2026-03-09T18:27:16.062 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:15 vm04 ceph-mon[51427]: 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.b
2026-03-09T18:27:16.062 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:15 vm04 ceph-mon[51427]: fsmap
2026-03-09T18:27:16.062 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:15 vm04 ceph-mon[51427]: osdmap e4: 0 total, 0 up, 0 in
2026-03-09T18:27:16.062 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:15 vm04 ceph-mon[51427]: mgrmap e13: y(active, since 17s)
2026-03-09T18:27:16.062 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:15 vm04 ceph-mon[51427]: overall HEALTH_OK
2026-03-09T18:27:16.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:15 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:27:16.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:15 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T18:27:16.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:15 vm09 ceph-mon[54744]: mon.a calling monitor election
2026-03-09T18:27:16.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:15 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T18:27:16.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:15 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T18:27:16.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:15 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T18:27:16.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:15 vm09 ceph-mon[54744]: mon.b calling monitor election
2026-03-09T18:27:16.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:15 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T18:27:16.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:15 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T18:27:16.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:15 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T18:27:16.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:15 vm09 ceph-mon[54744]: mon.a is new leader, mons a,b in quorum (ranks 0,1)
2026-03-09T18:27:16.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:15 vm09 ceph-mon[54744]: monmap epoch 2
2026-03-09T18:27:16.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:15 vm09 ceph-mon[54744]: fsid 5769e1c8-1be5-11f1-a591-591820987f3e
2026-03-09T18:27:16.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:15 vm09 ceph-mon[54744]: last_changed 2026-03-09T18:27:10.726761+0000
2026-03-09T18:27:16.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:15 vm09 ceph-mon[54744]: created 2026-03-09T18:26:34.930477+0000
2026-03-09T18:27:16.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:15 vm09 ceph-mon[54744]: min_mon_release 19 (squid)
2026-03-09T18:27:16.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:15 vm09 ceph-mon[54744]: election_strategy: 1
2026-03-09T18:27:16.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:15 vm09 ceph-mon[54744]: 0: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.a
2026-03-09T18:27:16.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:15 vm09 ceph-mon[54744]: 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.b
2026-03-09T18:27:16.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:15 vm09 ceph-mon[54744]: fsmap
2026-03-09T18:27:16.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:15 vm09 ceph-mon[54744]: osdmap e4: 0 total, 0 up, 0 in
2026-03-09T18:27:16.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:15 vm09 ceph-mon[54744]: mgrmap e13: y(active, since 17s)
2026-03-09T18:27:16.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:15 vm09 ceph-mon[54744]: overall HEALTH_OK
2026-03-09T18:27:16.358 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:27:16.358 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":2,"fsid":"5769e1c8-1be5-11f1-a591-591820987f3e","modified":"2026-03-09T18:27:10.726761Z","created":"2026-03-09T18:26:34.930477Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:3300","nonce":0},{"type":"v1","addr":"192.168.123.104:6789","nonce":0}]},"addr":"192.168.123.104:6789/0","public_addr":"192.168.123.104:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:3300","nonce":0},{"type":"v1","addr":"192.168.123.109:6789","nonce":0}]},"addr":"192.168.123.109:6789/0","public_addr":"192.168.123.109:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]}
2026-03-09T18:27:16.358 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 2
2026-03-09T18:27:16.627 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 systemd[1]: Starting Ceph mon.c for 5769e1c8-1be5-11f1-a591-591820987f3e...
2026-03-09T18:27:16.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 podman[57567]: 2026-03-09 18:27:16.626899261 +0000 UTC m=+0.018538602 container create 2c86a2818ffe359a62a642b38927fff215784f6b55e16f2b3fe69cc4e87527fb (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mon-c, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , ceph=True)
2026-03-09T18:27:16.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 podman[57567]: 2026-03-09 18:27:16.667643945 +0000 UTC m=+0.059283297 container init 2c86a2818ffe359a62a642b38927fff215784f6b55e16f2b3fe69cc4e87527fb (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mon-c, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
2026-03-09T18:27:16.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 podman[57567]: 2026-03-09 18:27:16.671275583 +0000 UTC m=+0.062914924 container start 2c86a2818ffe359a62a642b38927fff215784f6b55e16f2b3fe69cc4e87527fb (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mon-c, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
2026-03-09T18:27:16.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 bash[57567]: 2c86a2818ffe359a62a642b38927fff215784f6b55e16f2b3fe69cc4e87527fb
2026-03-09T18:27:16.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 podman[57567]: 2026-03-09 18:27:16.619416515 +0000 UTC m=+0.011055866 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc
2026-03-09T18:27:16.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 systemd[1]: Started Ceph mon.c for 5769e1c8-1be5-11f1-a591-591820987f3e.
2026-03-09T18:27:16.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: set uid:gid to 167:167 (ceph:ceph)
2026-03-09T18:27:16.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 2
2026-03-09T18:27:16.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: pidfile_write: ignore empty --pid-file
2026-03-09T18:27:16.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: load: jerasure load: lrc
2026-03-09T18:27:16.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: RocksDB version: 7.9.2
2026-03-09T18:27:16.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Git sha 0
2026-03-09T18:27:16.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Compile date 2026-02-25 18:11:04
2026-03-09T18:27:16.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: DB SUMMARY
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: DB Session ID: IW4FSUU7A4J55DUF7CMF
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: CURRENT file: CURRENT
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: IDENTITY file: IDENTITY
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: SST files in /var/lib/ceph/mon/ceph-c/store.db dir, Total Num: 0, files:
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-c/store.db: 000004.log size: 636 ;
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.error_if_exists: 0
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.create_if_missing: 0
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.paranoid_checks: 1
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.flush_verify_memtable_count: 1
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.track_and_verify_wals_in_manifest: 0
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.env: 0x5594e4ed8dc0
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.fs: PosixFileSystem
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.info_log: 0x5594e6f7a700
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.max_file_opening_threads: 16
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.statistics: (nil)
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.use_fsync: 0
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.max_log_file_size: 0
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.max_manifest_file_size: 1073741824
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.log_file_time_to_roll: 0
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.keep_log_file_num: 1000
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.recycle_log_file_num: 0
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.allow_fallocate: 1
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.allow_mmap_reads: 0
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.allow_mmap_writes: 0
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.use_direct_reads: 0
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.create_missing_column_families: 0
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.db_log_dir:
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.wal_dir:
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.table_cache_numshardbits: 6
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.WAL_ttl_seconds: 0
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.WAL_size_limit_MB: 0
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.manifest_preallocation_size: 4194304
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.is_fd_close_on_exec: 1
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.advise_random_on_open: 1
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.db_write_buffer_size: 0
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.write_buffer_manager: 0x5594e6f7f900
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.access_hint_on_compaction_start: 1
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.random_access_max_buffer_size: 1048576
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.use_adaptive_mutex: 0
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.rate_limiter: (nil)
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.wal_recovery_mode: 2
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.enable_thread_tracking: 0
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.enable_pipelined_write: 0
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.unordered_write: 0
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.allow_concurrent_memtable_write: 1
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.enable_write_thread_adaptive_yield: 1
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.write_thread_max_yield_usec: 100
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.write_thread_slow_yield_usec: 3
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.row_cache: None
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.wal_filter: None
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.avoid_flush_during_recovery: 0
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.allow_ingest_behind: 0
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.two_write_queues: 0
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.manual_wal_flush: 0
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.wal_compression: 0
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.atomic_flush: 0
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.avoid_unnecessary_blocking_io: 0
2026-03-09T18:27:16.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.persist_stats_to_disk: 0
2026-03-09T18:27:16.969 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.write_dbid_to_manifest: 0
2026-03-09T18:27:16.969 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.log_readahead_size: 0
2026-03-09T18:27:16.969 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.file_checksum_gen_factory: Unknown
2026-03-09T18:27:16.969 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.best_efforts_recovery: 0
2026-03-09T18:27:16.969 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.max_bgerror_resume_count: 2147483647
2026-03-09T18:27:16.969 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.bgerror_resume_retry_interval: 1000000
2026-03-09T18:27:16.969 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.allow_data_in_errors: 0
2026-03-09T18:27:16.969 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.db_host_id: __hostname__
2026-03-09T18:27:16.969 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.enforce_single_del_contracts: true
2026-03-09T18:27:16.969 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.max_background_jobs: 2
2026-03-09T18:27:16.969 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.max_background_compactions: -1
2026-03-09T18:27:16.969 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 
ceph-mon[57581]: rocksdb: Options.max_subcompactions: 1 2026-03-09T18:27:16.969 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-09T18:27:16.969 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-09T18:27:16.969 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.delayed_write_rate : 16777216 2026-03-09T18:27:16.969 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.max_total_wal_size: 0 2026-03-09T18:27:16.969 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-09T18:27:16.969 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.stats_dump_period_sec: 600 2026-03-09T18:27:16.969 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.stats_persist_period_sec: 600 2026-03-09T18:27:16.969 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-09T18:27:16.969 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.max_open_files: -1 2026-03-09T18:27:16.969 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.bytes_per_sync: 0 2026-03-09T18:27:16.969 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.wal_bytes_per_sync: 0 2026-03-09T18:27:16.969 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.strict_bytes_per_sync: 0 2026-03-09T18:27:16.969 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.compaction_readahead_size: 0 2026-03-09T18:27:16.969 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.max_background_flushes: -1 2026-03-09T18:27:16.969 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Compression algorithms supported: 2026-03-09T18:27:16.969 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: kZSTD supported: 0 2026-03-09T18:27:16.969 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: kXpressCompression supported: 0 2026-03-09T18:27:16.969 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: kBZip2Compression supported: 0 2026-03-09T18:27:16.969 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-09T18:27:16.969 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: kLZ4Compression supported: 1 2026-03-09T18:27:16.969 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: kZlibCompression supported: 1 2026-03-09T18:27:16.969 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: kLZ4HCCompression supported: 1 2026-03-09T18:27:16.969 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: kSnappyCompression supported: 1 2026-03-09T18:27:16.969 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Fast CRC32 supported: Supported on x86 2026-03-09T18:27:16.969 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: DMutex implementation: pthread_mutex_t 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-c/store.db/MANIFEST-000005 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 
ceph-mon[57581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.merge_operator: 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.compaction_filter: None 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.compaction_filter_factory: None 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.sst_partitioner_factory: None 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.memtable_factory: SkipListFactory 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.table_factory: BlockBasedTable 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5594e6f7a640) 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout: cache_index_and_filter_blocks: 1 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout: pin_top_level_index_and_filter: 1 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout: index_type: 0 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout: data_block_index_type: 0 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout: 
index_shortening: 1 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout: data_block_hash_table_util_ratio: 0.750000 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout: checksum: 4 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout: no_block_cache: 0 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout: block_cache: 0x5594e6f9f350 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout: block_cache_name: BinnedLRUCache 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout: block_cache_options: 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout: capacity : 536870912 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout: num_shard_bits : 4 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout: strict_capacity_limit : 0 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout: high_pri_pool_ratio: 0.000 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout: block_cache_compressed: (nil) 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout: persistent_cache: (nil) 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout: block_size: 4096 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout: block_size_deviation: 10 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout: block_restart_interval: 16 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout: index_block_restart_interval: 1 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout: metadata_block_size: 4096 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout: partition_filters: 0 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout: use_delta_encoding: 1 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout: filter_policy: bloomfilter 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout: whole_key_filtering: 1 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout: verify_compression: 0 2026-03-09T18:27:16.970 
INFO:journalctl@ceph.mon.c.vm04.stdout: read_amp_bytes_per_bit: 0 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout: format_version: 5 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout: enable_index_compression: 1 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout: block_align: 0 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout: max_auto_readahead_size: 262144 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout: prepopulate_block_cache: 0 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout: initial_auto_readahead_size: 8192 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout: num_file_reads_for_auto_readahead: 2 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.write_buffer_size: 33554432 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.max_write_buffer_number: 2 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.compression: NoCompression 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.bottommost_compression: Disabled 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.prefix_extractor: nullptr 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.num_levels: 7 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: 
rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-09T18:27:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.compression_opts.window_bits: -14 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.compression_opts.level: 32767 
2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.compression_opts.strategy: 0 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.compression_opts.enabled: false 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.target_file_size_base: 67108864 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.target_file_size_multiplier: 1 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.max_bytes_for_level_base: 268435456 
2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 
ceph-mon[57581]: rocksdb: Options.arena_block_size: 1048576 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.disable_auto_compactions: 0 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 
ceph-mon[57581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.inplace_update_support: 0 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.inplace_update_num_locks: 10000 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.memtable_huge_page_size: 0 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.bloom_locality: 0 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.max_successive_merges: 0 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.optimize_filters_for_hits: 0 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.paranoid_file_checks: 0 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.force_consistency_checks: 1 2026-03-09T18:27:16.971 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.report_bg_io_stats: 0 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.ttl: 2592000 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.periodic_compaction_seconds: 0 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.enable_blob_files: false 2026-03-09T18:27:16.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.min_blob_size: 0 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.blob_file_size: 268435456 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.blob_compression_type: NoCompression 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.enable_blob_garbage_collection: false 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 
vm04 ceph-mon[57581]: rocksdb: Options.blob_file_starting_level: 0 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-c/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 9cd0e0be-b177-4313-8674-3f237ea7d8ec 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773080836702527, "job": 1, "event": "recovery_started", "wal_files": [4]} 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773080836703244, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1768, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 648, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 526, 
"raw_average_value_size": 105, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773080836, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9cd0e0be-b177-4313-8674-3f237ea7d8ec", "db_session_id": "IW4FSUU7A4J55DUF7CMF", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}} 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773080836703401, "job": 1, "event": "recovery_finished"} 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: [db/version_set.cc:5047] Creating manifest 10 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-c/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5594e6fa0e00 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: DB pointer 0x5594e70b6000 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 
vm04 ceph-mon[57581]: mon.c does not exist in monmap, will attempt to join an existing cluster 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout: ** DB Stats ** 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout: Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout: Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout: 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout: ** Compaction Stats [default] ** 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout: 
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout: L0 1/0 1.73 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 2.4 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout: Sum 1/0 1.73 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 2.4 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 2.4 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout: 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout: ** Compaction Stats [default] ** 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout: --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2.4 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout: 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout: 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout: Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout: Flush(GB): cumulative 0.000, interval 0.000 2026-03-09T18:27:16.972 
INFO:journalctl@ceph.mon.c.vm04.stdout: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout: AddFile(Total Files): cumulative 0, interval 0 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout: AddFile(L0 Files): cumulative 0, interval 0 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout: AddFile(Keys): cumulative 0, interval 0 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout: Cumulative compaction: 0.00 GB write, 0.20 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout: Interval compaction: 0.00 GB write, 0.20 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout: Block cache BinnedLRUCache@0x5594e6f9f350#2 capacity: 512.00 MB usage: 0.98 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 8e-06 secs_since: 0 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout: Block cache entry stats(count,size,portion): DataBlock(1,0.77 KB,0.000146031%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%) 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout: 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout: ** File Read Latency Histogram By Level [default] ** 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: using public_addrv [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: 
starting mon.c rank -1 at public addrs [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] at bind addrs [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] mon_data /var/lib/ceph/mon/ceph-c fsid 5769e1c8-1be5-11f1-a591-591820987f3e 2026-03-09T18:27:16.972 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: mon.c@-1(???) e0 preinit fsid 5769e1c8-1be5-11f1-a591-591820987f3e 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: mon.c@-1(synchronizing).mds e1 new map 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: mon.c@-1(synchronizing).mds e1 print_map 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout: e1 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout: btime 2026-03-09T18:26:36:478221+0000 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout: enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout: legacy client fscid: -1 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout: 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout: No filesystems configured 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: mon.c@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: mon.c@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 
inc_osd_cache size: 1 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: mon.c@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: mon.c@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: mon.c@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: mon.c@-1(synchronizing).osd e4 e4: 0 total, 0 up, 0 in 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: mkfs 5769e1c8-1be5-11f1-a591-591820987f3e 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: monmap epoch 1 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: mon.c@-1(synchronizing).osd e4 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: mon.c@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: mon.c@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: mon.c@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires 
2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: fsid 5769e1c8-1be5-11f1-a591-591820987f3e 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: last_changed 2026-03-09T18:26:34.930477+0000 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: created 2026-03-09T18:26:34.930477+0000 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: min_mon_release 19 (squid) 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: election_strategy: 1 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: 0: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.a 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: fsmap 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: osdmap e1: 0 total, 0 up, 0 in 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: mgrmap e1: no daemons active 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/4158963032' entity='client.admin' 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/1758902856' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='client.? 
192.168.123.104:0/2617039479' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: Activating manager daemon y 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: mgrmap e2: y(active, starting, since 0.104015s) 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14100 192.168.123.104:0/432290636' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14100 192.168.123.104:0/432290636' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14100 192.168.123.104:0/432290636' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14100 192.168.123.104:0/432290636' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14100 192.168.123.104:0/432290636' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14100 192.168.123.104:0/432290636' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14100 192.168.123.104:0/432290636' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14100 192.168.123.104:0/432290636' entity='mgr.y' 
cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: Manager daemon y is now available 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14100 192.168.123.104:0/432290636' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14100 192.168.123.104:0/432290636' entity='mgr.y' 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14100 192.168.123.104:0/432290636' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14100 192.168.123.104:0/432290636' entity='mgr.y' 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14100 192.168.123.104:0/432290636' entity='mgr.y' 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: mgrmap e3: y(active, since 1.10859s) 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/2888362470' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='client.? 
192.168.123.104:0/1108840609' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: mgrmap e4: y(active, since 2s) 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3641037577' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3641037577' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: mgrmap e5: y(active, since 3s) 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/1301508039' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: Active manager daemon y restarted 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: Activating manager daemon y 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: osdmap e2: 0 total, 0 up, 0 in 2026-03-09T18:27:16.973 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: mgrmap e6: y(active, starting, since 0.0045732s) 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y' cmd=[{"prefix": "mgr 
metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: Manager daemon y is now available 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y' 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y' 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y' 
cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: Found migration_current of "None". Setting to last migration. 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y' 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y' 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: mgrmap e7: y(active, since 1.00731s) 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: [09/Mar/2026:18:26:49] ENGINE Bus STARTING 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y' 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: 
[09/Mar/2026:18:26:50] ENGINE Serving on http://192.168.123.104:8765 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y' 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y' 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: [09/Mar/2026:18:26:50] ENGINE Serving on https://192.168.123.104:7150 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: [09/Mar/2026:18:26:50] ENGINE Bus STARTED 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: [09/Mar/2026:18:26:50] ENGINE Client ('192.168.123.104', 52936) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: Generating ssh key... 
2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: mgrmap e8: y(active, since 2s) 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm04", "addr": "192.168.123.104", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: Deploying cephadm binary to vm04 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y' 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: Added host vm04 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y' 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y' 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/372788233' entity='client.admin' 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='client.? 
192.168.123.104:0/3780005662' entity='client.admin' 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: Saving service mon spec with placement count:5 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: Saving service mgr spec with placement count:2 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/1979316085' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y' 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14118 192.168.123.104:0/1106902082' entity='mgr.y' 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/1979316085' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: mgrmap e9: y(active, since 6s) 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='client.? 
192.168.123.104:0/3117394488' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: Active manager daemon y restarted 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: Activating manager daemon y 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: osdmap e3: 0 total, 0 up, 0 in 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: mgrmap e10: y(active, starting, since 0.166442s) 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T18:27:16.974 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: Manager daemon y is now available 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 
cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: mgrmap e11: y(active, since 1.13034s) 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: [09/Mar/2026:18:26:59] ENGINE Bus STARTING 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: [09/Mar/2026:18:27:00] ENGINE Serving on https://192.168.123.104:7150 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: [09/Mar/2026:18:27:00] ENGINE Client ('192.168.123.104', 59040) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: [09/Mar/2026:18:27:00] ENGINE Serving on http://192.168.123.104:8765 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: [09/Mar/2026:18:27:00] ENGINE Bus STARTED 2026-03-09T18:27:16.975 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: mgrmap e12: y(active, since 2s) 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/318225902' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/2261911054' entity='client.admin' 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='client.? 
192.168.123.104:0/2867396109' entity='client.admin' 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' 
entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: Updating vm04:/etc/ceph/ceph.conf 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: Updating vm04:/var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/config/ceph.conf 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: Updating vm04:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: Updating vm04:/var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/config/ceph.client.admin.keyring 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm09", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: Deploying cephadm binary to vm09 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: mgrmap e13: y(active, since 6s) 2026-03-09T18:27:16.975 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: Added host vm09 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/2148983890' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='client.? 
192.168.123.104:0/2148983890' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: osdmap e4: 0 total, 0 up, 0 in 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm04:192.168.123.104=a;vm04:[v2:192.168.123.104:3301,v1:192.168.123.104:6790]=c;vm09:192.168.123.109=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: Saving service mon spec with placement vm04:192.168.123.104=a;vm04:[v2:192.168.123.104:3301,v1:192.168.123.104:6790]=c;vm09:192.168.123.109=b;count:3 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='client.? 
192.168.123.109:0/2976322287' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:27:16.975 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:16.976 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:27:16.976 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:16.976 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:16.976 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:16.976 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T18:27:16.976 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:16.976 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: Updating vm09:/etc/ceph/ceph.conf 2026-03-09T18:27:16.976 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: Updating vm09:/var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/config/ceph.conf 2026-03-09T18:27:16.976 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: Updating vm09:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:27:16.976 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: Updating vm09:/var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/config/ceph.client.admin.keyring 2026-03-09T18:27:16.976 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: Deploying daemon mon.b on vm09 2026-03-09T18:27:16.976 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:16.976 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:27:16.976 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: mon.a calling monitor election 2026-03-09T18:27:16.976 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: 
dispatch 2026-03-09T18:27:16.976 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:27:16.976 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:27:16.976 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: mon.b calling monitor election 2026-03-09T18:27:16.976 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:27:16.976 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:27:16.976 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:27:16.976 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-09T18:27:16.976 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: monmap epoch 2 2026-03-09T18:27:16.976 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: fsid 5769e1c8-1be5-11f1-a591-591820987f3e 2026-03-09T18:27:16.976 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: last_changed 2026-03-09T18:27:10.726761+0000 2026-03-09T18:27:16.976 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: created 2026-03-09T18:26:34.930477+0000 2026-03-09T18:27:16.976 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 
vm04 ceph-mon[57581]: min_mon_release 19 (squid) 2026-03-09T18:27:16.976 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: election_strategy: 1 2026-03-09T18:27:16.976 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: 0: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.a 2026-03-09T18:27:16.976 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.b 2026-03-09T18:27:16.976 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: fsmap 2026-03-09T18:27:16.976 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: osdmap e4: 0 total, 0 up, 0 in 2026-03-09T18:27:16.976 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: mgrmap e13: y(active, since 17s) 2026-03-09T18:27:16.976 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: overall HEALTH_OK 2026-03-09T18:27:16.976 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:16 vm04 ceph-mon[57581]: mon.c@-1(synchronizing).paxosservice(auth 1..3) refresh upgraded, format 0 -> 3 2026-03-09T18:27:17.414 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 
2026-03-09T18:27:17.414 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph mon dump -f json 2026-03-09T18:27:17.597 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.b/config 2026-03-09T18:27:18.217 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:27:17 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:27:17.725+0000 7f1e99b8f640 -1 mgr.server handle_report got status from non-daemon mon.b 2026-03-09T18:27:22.078 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:21 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:27:22.079 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:21 vm04 ceph-mon[51427]: mon.a calling monitor election 2026-03-09T18:27:22.079 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:21 vm04 ceph-mon[51427]: mon.b calling monitor election 2026-03-09T18:27:22.079 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:21 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:27:22.079 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:21 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:27:22.079 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:21 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:27:22.079 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:21 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon 
metadata", "id": "c"}]: dispatch 2026-03-09T18:27:22.079 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:21 vm04 ceph-mon[51427]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:27:22.079 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:21 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:27:22.079 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:21 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:27:22.079 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:21 vm04 ceph-mon[51427]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:27:22.079 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:21 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:27:22.079 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:21 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:27:22.079 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:21 vm04 ceph-mon[51427]: mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-09T18:27:22.079 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:21 vm04 ceph-mon[51427]: monmap epoch 3 2026-03-09T18:27:22.079 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:21 vm04 ceph-mon[51427]: fsid 5769e1c8-1be5-11f1-a591-591820987f3e 2026-03-09T18:27:22.079 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:21 vm04 ceph-mon[51427]: last_changed 2026-03-09T18:27:16.753858+0000 2026-03-09T18:27:22.079 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:21 vm04 ceph-mon[51427]: created 2026-03-09T18:26:34.930477+0000 2026-03-09T18:27:22.079 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:21 vm04 
ceph-mon[51427]: min_mon_release 19 (squid) 2026-03-09T18:27:22.079 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:21 vm04 ceph-mon[51427]: election_strategy: 1 2026-03-09T18:27:22.079 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:21 vm04 ceph-mon[51427]: 0: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.a 2026-03-09T18:27:22.079 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:21 vm04 ceph-mon[51427]: 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.b 2026-03-09T18:27:22.079 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:21 vm04 ceph-mon[51427]: 2: [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] mon.c 2026-03-09T18:27:22.079 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:21 vm04 ceph-mon[51427]: fsmap 2026-03-09T18:27:22.079 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:21 vm04 ceph-mon[51427]: osdmap e4: 0 total, 0 up, 0 in 2026-03-09T18:27:22.079 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:21 vm04 ceph-mon[51427]: mgrmap e13: y(active, since 23s) 2026-03-09T18:27:22.079 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:21 vm04 ceph-mon[51427]: overall HEALTH_OK 2026-03-09T18:27:22.079 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:21 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:22.087 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:21 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:27:22.087 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:21 vm09 ceph-mon[54744]: mon.a calling monitor election 2026-03-09T18:27:22.087 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:21 vm09 ceph-mon[54744]: mon.b calling monitor election 2026-03-09T18:27:22.087 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:21 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: 
dispatch 2026-03-09T18:27:22.087 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:21 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:27:22.087 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:21 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:27:22.087 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:21 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:27:22.087 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:21 vm09 ceph-mon[54744]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:27:22.087 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:21 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:27:22.088 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:21 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:27:22.088 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:21 vm09 ceph-mon[54744]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:27:22.088 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:21 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:27:22.088 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:21 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:27:22.088 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:21 vm09 ceph-mon[54744]: mon.a is new leader, mons a,b in quorum (ranks 0,1) 
2026-03-09T18:27:22.088 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:21 vm09 ceph-mon[54744]: monmap epoch 3 2026-03-09T18:27:22.088 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:21 vm09 ceph-mon[54744]: fsid 5769e1c8-1be5-11f1-a591-591820987f3e 2026-03-09T18:27:22.088 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:21 vm09 ceph-mon[54744]: last_changed 2026-03-09T18:27:16.753858+0000 2026-03-09T18:27:22.088 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:21 vm09 ceph-mon[54744]: created 2026-03-09T18:26:34.930477+0000 2026-03-09T18:27:22.088 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:21 vm09 ceph-mon[54744]: min_mon_release 19 (squid) 2026-03-09T18:27:22.088 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:21 vm09 ceph-mon[54744]: election_strategy: 1 2026-03-09T18:27:22.088 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:21 vm09 ceph-mon[54744]: 0: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.a 2026-03-09T18:27:22.088 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:21 vm09 ceph-mon[54744]: 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.b 2026-03-09T18:27:22.088 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:21 vm09 ceph-mon[54744]: 2: [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] mon.c 2026-03-09T18:27:22.088 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:21 vm09 ceph-mon[54744]: fsmap 2026-03-09T18:27:22.088 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:21 vm09 ceph-mon[54744]: osdmap e4: 0 total, 0 up, 0 in 2026-03-09T18:27:22.088 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:21 vm09 ceph-mon[54744]: mgrmap e13: y(active, since 23s) 2026-03-09T18:27:22.088 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:21 vm09 ceph-mon[54744]: overall HEALTH_OK 2026-03-09T18:27:22.088 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:21 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:22.269 
INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:27:22.269 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":3,"fsid":"5769e1c8-1be5-11f1-a591-591820987f3e","modified":"2026-03-09T18:27:16.753858Z","created":"2026-03-09T18:26:34.930477Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:3300","nonce":0},{"type":"v1","addr":"192.168.123.104:6789","nonce":0}]},"addr":"192.168.123.104:6789/0","public_addr":"192.168.123.104:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:3300","nonce":0},{"type":"v1","addr":"192.168.123.109:6789","nonce":0}]},"addr":"192.168.123.109:6789/0","public_addr":"192.168.123.109:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:3301","nonce":0},{"type":"v1","addr":"192.168.123.104:6790","nonce":0}]},"addr":"192.168.123.104:6790/0","public_addr":"192.168.123.104:6790/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]} 2026-03-09T18:27:22.270 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 3 2026-03-09T18:27:22.329 INFO:tasks.cephadm:Generating final ceph.conf file... 
2026-03-09T18:27:22.329 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph config generate-minimal-conf 2026-03-09T18:27:22.550 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config 2026-03-09T18:27:22.818 INFO:teuthology.orchestra.run.vm04.stdout:# minimal ceph.conf for 5769e1c8-1be5-11f1-a591-591820987f3e 2026-03-09T18:27:22.818 INFO:teuthology.orchestra.run.vm04.stdout:[global] 2026-03-09T18:27:22.818 INFO:teuthology.orchestra.run.vm04.stdout: fsid = 5769e1c8-1be5-11f1-a591-591820987f3e 2026-03-09T18:27:22.819 INFO:teuthology.orchestra.run.vm04.stdout: mon_host = [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] 2026-03-09T18:27:22.905 INFO:tasks.cephadm:Distributing (final) config and client.admin keyring... 
2026-03-09T18:27:22.906 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T18:27:22.906 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/ceph/ceph.conf 2026-03-09T18:27:22.952 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T18:27:22.952 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:27:23.035 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T18:27:23.035 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/ceph/ceph.conf 2026-03-09T18:27:23.064 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T18:27:23.064 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:27:23.139 INFO:tasks.cephadm:Adding mgr.y on vm04 2026-03-09T18:27:23.139 INFO:tasks.cephadm:Adding mgr.x on vm09 2026-03-09T18:27:23.139 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph orch apply mgr '2;vm04=y;vm09=x' 2026-03-09T18:27:23.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
2026-03-09T18:27:23.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:27:23.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[51427]: from='client.? 192.168.123.109:0/1310295102' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T18:27:23.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:23 vm04 
ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T18:27:23.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T18:27:23.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:23.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:27:23.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/2502061596' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:23.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T18:27:23.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T18:27:23.210 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 
cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:23.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:23 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:23 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:23 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:23 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:23 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:23.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:23 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:27:23.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:23 vm09 ceph-mon[54744]: from='client.? 
192.168.123.109:0/1310295102' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T18:27:23.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:23 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:23 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:23 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:23 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:23 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:23 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:23 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:23 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:23 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:23 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T18:27:23.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:23 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": 
"public_network"}]: dispatch 2026-03-09T18:27:23.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:23 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:23.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:23 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:27:23.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:23 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/2502061596' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:23.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:23 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:23 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:23 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T18:27:23.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:23 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T18:27:23.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:23 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:23.378 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.b/config 2026-03-09T18:27:23.689 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled mgr update... 
2026-03-09T18:27:23.779 DEBUG:teuthology.orchestra.run.vm09:mgr.x> sudo journalctl -f -n 0 -u ceph-5769e1c8-1be5-11f1-a591-591820987f3e@mgr.x.service 2026-03-09T18:27:23.781 INFO:tasks.cephadm:Deploying OSDs... 2026-03-09T18:27:23.781 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T18:27:23.781 DEBUG:teuthology.orchestra.run.vm04:> dd if=/scratch_devs of=/dev/stdout 2026-03-09T18:27:23.796 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T18:27:23.796 DEBUG:teuthology.orchestra.run.vm04:> ls /dev/[sv]d? 2026-03-09T18:27:23.856 INFO:teuthology.orchestra.run.vm04.stdout:/dev/vda 2026-03-09T18:27:23.856 INFO:teuthology.orchestra.run.vm04.stdout:/dev/vdb 2026-03-09T18:27:23.856 INFO:teuthology.orchestra.run.vm04.stdout:/dev/vdc 2026-03-09T18:27:23.856 INFO:teuthology.orchestra.run.vm04.stdout:/dev/vdd 2026-03-09T18:27:23.856 INFO:teuthology.orchestra.run.vm04.stdout:/dev/vde 2026-03-09T18:27:23.856 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-09T18:27:23.856 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-09T18:27:23.856 DEBUG:teuthology.orchestra.run.vm04:> stat /dev/vdb 2026-03-09T18:27:23.913 INFO:teuthology.orchestra.run.vm04.stdout: File: /dev/vdb 2026-03-09T18:27:23.913 INFO:teuthology.orchestra.run.vm04.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-09T18:27:23.913 INFO:teuthology.orchestra.run.vm04.stdout:Device: 6h/6d Inode: 221 Links: 1 Device type: fc,10 2026-03-09T18:27:23.913 INFO:teuthology.orchestra.run.vm04.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T18:27:23.913 INFO:teuthology.orchestra.run.vm04.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-09T18:27:23.913 INFO:teuthology.orchestra.run.vm04.stdout:Access: 2026-03-09 18:27:02.744949001 +0000 2026-03-09T18:27:23.913 INFO:teuthology.orchestra.run.vm04.stdout:Modify: 2026-03-09 18:24:00.735007219 +0000 2026-03-09T18:27:23.913 
INFO:teuthology.orchestra.run.vm04.stdout:Change: 2026-03-09 18:24:00.735007219 +0000 2026-03-09T18:27:23.913 INFO:teuthology.orchestra.run.vm04.stdout: Birth: 2026-03-09 18:20:26.245000000 +0000 2026-03-09T18:27:23.913 DEBUG:teuthology.orchestra.run.vm04:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: mon.a calling monitor election 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: mon.b calling monitor election 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", 
"id": "c"}]: dispatch 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: monmap epoch 3 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: fsid 5769e1c8-1be5-11f1-a591-591820987f3e 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: last_changed 2026-03-09T18:27:16.753858+0000 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: created 2026-03-09T18:26:34.930477+0000 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: min_mon_release 19 (squid) 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: election_strategy: 1 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: 0: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.a 2026-03-09T18:27:23.967 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.b 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: 2: [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] mon.c 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: fsmap 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: osdmap e4: 0 total, 0 up, 0 in 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: mgrmap e13: y(active, since 23s) 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: overall HEALTH_OK 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 
cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: from='client.? 192.168.123.109:0/1310295102' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T18:27:23.968 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T18:27:23.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:23.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:27:23.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/2502061596' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:23.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:23.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T18:27:23.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T18:27:23.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:23 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:23.976 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records in 2026-03-09T18:27:23.976 
INFO:teuthology.orchestra.run.vm04.stderr:1+0 records out 2026-03-09T18:27:23.976 INFO:teuthology.orchestra.run.vm04.stderr:512 bytes copied, 0.00013308 s, 3.8 MB/s 2026-03-09T18:27:23.977 DEBUG:teuthology.orchestra.run.vm04:> ! mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-09T18:27:24.036 DEBUG:teuthology.orchestra.run.vm04:> stat /dev/vdc 2026-03-09T18:27:24.092 INFO:teuthology.orchestra.run.vm04.stdout: File: /dev/vdc 2026-03-09T18:27:24.092 INFO:teuthology.orchestra.run.vm04.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-09T18:27:24.092 INFO:teuthology.orchestra.run.vm04.stdout:Device: 6h/6d Inode: 250 Links: 1 Device type: fc,20 2026-03-09T18:27:24.092 INFO:teuthology.orchestra.run.vm04.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T18:27:24.092 INFO:teuthology.orchestra.run.vm04.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-09T18:27:24.092 INFO:teuthology.orchestra.run.vm04.stdout:Access: 2026-03-09 18:27:02.774949033 +0000 2026-03-09T18:27:24.092 INFO:teuthology.orchestra.run.vm04.stdout:Modify: 2026-03-09 18:24:00.737007221 +0000 2026-03-09T18:27:24.092 INFO:teuthology.orchestra.run.vm04.stdout:Change: 2026-03-09 18:24:00.737007221 +0000 2026-03-09T18:27:24.092 INFO:teuthology.orchestra.run.vm04.stdout: Birth: 2026-03-09 18:20:26.287000000 +0000 2026-03-09T18:27:24.093 DEBUG:teuthology.orchestra.run.vm04:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-09T18:27:24.155 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records in 2026-03-09T18:27:24.155 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records out 2026-03-09T18:27:24.155 INFO:teuthology.orchestra.run.vm04.stderr:512 bytes copied, 0.000130415 s, 3.9 MB/s 2026-03-09T18:27:24.156 DEBUG:teuthology.orchestra.run.vm04:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-09T18:27:24.213 DEBUG:teuthology.orchestra.run.vm04:> stat /dev/vdd 2026-03-09T18:27:24.271 INFO:teuthology.orchestra.run.vm04.stdout: File: /dev/vdd 2026-03-09T18:27:24.271 INFO:teuthology.orchestra.run.vm04.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-09T18:27:24.271 INFO:teuthology.orchestra.run.vm04.stdout:Device: 6h/6d Inode: 256 Links: 1 Device type: fc,30 2026-03-09T18:27:24.271 INFO:teuthology.orchestra.run.vm04.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T18:27:24.271 INFO:teuthology.orchestra.run.vm04.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-09T18:27:24.271 INFO:teuthology.orchestra.run.vm04.stdout:Access: 2026-03-09 18:27:02.804949065 +0000 2026-03-09T18:27:24.271 INFO:teuthology.orchestra.run.vm04.stdout:Modify: 2026-03-09 18:24:00.739007223 +0000 2026-03-09T18:27:24.271 INFO:teuthology.orchestra.run.vm04.stdout:Change: 2026-03-09 18:24:00.739007223 +0000 2026-03-09T18:27:24.271 INFO:teuthology.orchestra.run.vm04.stdout: Birth: 2026-03-09 18:20:26.292000000 +0000 2026-03-09T18:27:24.271 DEBUG:teuthology.orchestra.run.vm04:> sudo dd if=/dev/vdd of=/dev/null count=1 2026-03-09T18:27:24.337 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records in 2026-03-09T18:27:24.338 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records out 2026-03-09T18:27:24.338 INFO:teuthology.orchestra.run.vm04.stderr:512 bytes copied, 0.000136155 s, 3.8 MB/s 2026-03-09T18:27:24.338 DEBUG:teuthology.orchestra.run.vm04:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-09T18:27:24.399 DEBUG:teuthology.orchestra.run.vm04:> stat /dev/vde 2026-03-09T18:27:24.456 INFO:teuthology.orchestra.run.vm04.stdout: File: /dev/vde 2026-03-09T18:27:24.456 INFO:teuthology.orchestra.run.vm04.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-09T18:27:24.456 INFO:teuthology.orchestra.run.vm04.stdout:Device: 6h/6d Inode: 257 Links: 1 Device type: fc,40 2026-03-09T18:27:24.456 INFO:teuthology.orchestra.run.vm04.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T18:27:24.456 INFO:teuthology.orchestra.run.vm04.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-09T18:27:24.456 INFO:teuthology.orchestra.run.vm04.stdout:Access: 2026-03-09 18:27:02.840949103 +0000 2026-03-09T18:27:24.456 INFO:teuthology.orchestra.run.vm04.stdout:Modify: 2026-03-09 18:24:00.751007237 +0000 2026-03-09T18:27:24.456 INFO:teuthology.orchestra.run.vm04.stdout:Change: 2026-03-09 18:24:00.751007237 +0000 2026-03-09T18:27:24.456 INFO:teuthology.orchestra.run.vm04.stdout: Birth: 2026-03-09 18:20:26.294000000 +0000 2026-03-09T18:27:24.456 DEBUG:teuthology.orchestra.run.vm04:> sudo dd if=/dev/vde of=/dev/null count=1 2026-03-09T18:27:24.524 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records in 2026-03-09T18:27:24.524 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records out 2026-03-09T18:27:24.524 INFO:teuthology.orchestra.run.vm04.stderr:512 bytes copied, 0.000144831 s, 3.5 MB/s 2026-03-09T18:27:24.525 DEBUG:teuthology.orchestra.run.vm04:> ! 
mount | grep -v devtmpfs | grep -q /dev/vde 2026-03-09T18:27:24.581 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T18:27:24.581 DEBUG:teuthology.orchestra.run.vm09:> dd if=/scratch_devs of=/dev/stdout 2026-03-09T18:27:24.601 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:27:24 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:27:24.599+0000 7f4fed366140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T18:27:24.613 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T18:27:24.613 DEBUG:teuthology.orchestra.run.vm09:> ls /dev/[sv]d? 2026-03-09T18:27:24.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[51427]: mon.c calling monitor election 2026-03-09T18:27:24.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[51427]: Reconfiguring mon.b (monmap changed)... 2026-03-09T18:27:24.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[51427]: Reconfiguring daemon mon.b on vm09 2026-03-09T18:27:24.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[51427]: from='client.14205 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm04=y;vm09=x", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:27:24.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[51427]: Saving service mgr spec with placement vm04=y;vm09=x;count:2 2026-03-09T18:27:24.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[51427]: Deploying daemon mgr.x on vm09 2026-03-09T18:27:24.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[51427]: mon.c calling monitor election 2026-03-09T18:27:24.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[51427]: mon.b calling monitor election 2026-03-09T18:27:24.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[51427]: mon.a calling monitor election 
2026-03-09T18:27:24.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[51427]: mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2) 2026-03-09T18:27:24.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[57581]: mon.c calling monitor election 2026-03-09T18:27:24.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[57581]: Reconfiguring mon.b (monmap changed)... 2026-03-09T18:27:24.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[57581]: Reconfiguring daemon mon.b on vm09 2026-03-09T18:27:24.749 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vda 2026-03-09T18:27:24.749 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vdb 2026-03-09T18:27:24.749 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vdc 2026-03-09T18:27:24.749 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vdd 2026-03-09T18:27:24.749 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vde 2026-03-09T18:27:24.749 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-09T18:27:24.749 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-09T18:27:24.750 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vdb 2026-03-09T18:27:24.769 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vdb 2026-03-09T18:27:24.769 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-09T18:27:24.769 INFO:teuthology.orchestra.run.vm09.stdout:Device: 6h/6d Inode: 221 Links: 1 Device type: fc,10 2026-03-09T18:27:24.769 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T18:27:24.769 INFO:teuthology.orchestra.run.vm09.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-09T18:27:24.769 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-09 18:27:08.255838631 +0000 2026-03-09T18:27:24.769 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-09 18:24:00.415629471 +0000 
2026-03-09T18:27:24.769 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-09 18:24:00.415629471 +0000 2026-03-09T18:27:24.769 INFO:teuthology.orchestra.run.vm09.stdout: Birth: 2026-03-09 18:21:03.293000000 +0000 2026-03-09T18:27:24.770 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-09T18:27:24.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:24 vm09 ceph-mon[54744]: mon.c calling monitor election 2026-03-09T18:27:24.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:24 vm09 ceph-mon[54744]: Reconfiguring mon.b (monmap changed)... 2026-03-09T18:27:24.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:24 vm09 ceph-mon[54744]: Reconfiguring daemon mon.b on vm09 2026-03-09T18:27:24.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:24 vm09 ceph-mon[54744]: from='client.14205 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm04=y;vm09=x", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:27:24.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:24 vm09 ceph-mon[54744]: Saving service mgr spec with placement vm04=y;vm09=x;count:2 2026-03-09T18:27:24.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:24 vm09 ceph-mon[54744]: Deploying daemon mgr.x on vm09 2026-03-09T18:27:24.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:24 vm09 ceph-mon[54744]: mon.c calling monitor election 2026-03-09T18:27:24.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:24 vm09 ceph-mon[54744]: mon.b calling monitor election 2026-03-09T18:27:24.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:24 vm09 ceph-mon[54744]: mon.a calling monitor election 2026-03-09T18:27:24.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:24 vm09 ceph-mon[54744]: mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2) 2026-03-09T18:27:24.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:24 vm09 ceph-mon[54744]: monmap epoch 3 2026-03-09T18:27:24.858 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:24 vm09 ceph-mon[54744]: fsid 5769e1c8-1be5-11f1-a591-591820987f3e 2026-03-09T18:27:24.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:24 vm09 ceph-mon[54744]: last_changed 2026-03-09T18:27:16.753858+0000 2026-03-09T18:27:24.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:24 vm09 ceph-mon[54744]: created 2026-03-09T18:26:34.930477+0000 2026-03-09T18:27:24.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:24 vm09 ceph-mon[54744]: min_mon_release 19 (squid) 2026-03-09T18:27:24.859 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:24 vm09 ceph-mon[54744]: election_strategy: 1 2026-03-09T18:27:24.859 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:24 vm09 ceph-mon[54744]: 0: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.a 2026-03-09T18:27:24.859 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:24 vm09 ceph-mon[54744]: 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.b 2026-03-09T18:27:24.859 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:24 vm09 ceph-mon[54744]: 2: [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] mon.c 2026-03-09T18:27:24.859 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:24 vm09 ceph-mon[54744]: fsmap 2026-03-09T18:27:24.859 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:24 vm09 ceph-mon[54744]: osdmap e4: 0 total, 0 up, 0 in 2026-03-09T18:27:24.859 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:24 vm09 ceph-mon[54744]: mgrmap e13: y(active, since 25s) 2026-03-09T18:27:24.859 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:24 vm09 ceph-mon[54744]: overall HEALTH_OK 2026-03-09T18:27:24.859 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:24 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:24.859 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:24 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:24.859 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:24 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:24.859 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:24 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:24.859 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:24 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:27:24.859 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:27:24 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:27:24.649+0000 7f4fed366140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T18:27:24.918 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in 2026-03-09T18:27:24.918 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out 2026-03-09T18:27:24.918 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.007323 s, 69.9 kB/s 2026-03-09T18:27:24.920 DEBUG:teuthology.orchestra.run.vm09:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-09T18:27:24.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[51427]: monmap epoch 3 2026-03-09T18:27:24.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[51427]: fsid 5769e1c8-1be5-11f1-a591-591820987f3e 2026-03-09T18:27:24.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[51427]: last_changed 2026-03-09T18:27:16.753858+0000 2026-03-09T18:27:24.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[51427]: created 2026-03-09T18:26:34.930477+0000 2026-03-09T18:27:24.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[51427]: min_mon_release 19 (squid) 2026-03-09T18:27:24.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[51427]: election_strategy: 1 2026-03-09T18:27:24.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[51427]: 0: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.a 2026-03-09T18:27:24.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[51427]: 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.b 2026-03-09T18:27:24.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[51427]: 2: [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] mon.c 2026-03-09T18:27:24.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[51427]: fsmap 2026-03-09T18:27:24.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[51427]: osdmap e4: 0 total, 0 up, 0 in 2026-03-09T18:27:24.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[51427]: mgrmap e13: y(active, since 25s) 2026-03-09T18:27:24.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[51427]: overall HEALTH_OK 2026-03-09T18:27:24.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 
2026-03-09T18:27:24.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:24.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:24.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:24.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:27:24.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[57581]: from='client.14205 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm04=y;vm09=x", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:27:24.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[57581]: Saving service mgr spec with placement vm04=y;vm09=x;count:2 2026-03-09T18:27:24.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[57581]: Deploying daemon mgr.x on vm09 2026-03-09T18:27:24.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[57581]: mon.c calling monitor election 2026-03-09T18:27:24.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[57581]: mon.b calling monitor election 2026-03-09T18:27:24.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[57581]: mon.a calling monitor election 2026-03-09T18:27:24.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[57581]: mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2) 2026-03-09T18:27:24.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[57581]: monmap epoch 3 2026-03-09T18:27:24.968 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[57581]: fsid 5769e1c8-1be5-11f1-a591-591820987f3e 2026-03-09T18:27:24.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[57581]: last_changed 2026-03-09T18:27:16.753858+0000 2026-03-09T18:27:24.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[57581]: created 2026-03-09T18:26:34.930477+0000 2026-03-09T18:27:24.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[57581]: min_mon_release 19 (squid) 2026-03-09T18:27:24.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[57581]: election_strategy: 1 2026-03-09T18:27:24.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[57581]: 0: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.a 2026-03-09T18:27:24.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[57581]: 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.b 2026-03-09T18:27:24.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[57581]: 2: [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] mon.c 2026-03-09T18:27:24.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[57581]: fsmap 2026-03-09T18:27:24.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[57581]: osdmap e4: 0 total, 0 up, 0 in 2026-03-09T18:27:24.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[57581]: mgrmap e13: y(active, since 25s) 2026-03-09T18:27:24.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[57581]: overall HEALTH_OK 2026-03-09T18:27:24.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:24.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:24.968 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:24.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:24.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:24 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:27:25.027 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vdc 2026-03-09T18:27:25.062 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vdc 2026-03-09T18:27:25.062 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-09T18:27:25.062 INFO:teuthology.orchestra.run.vm09.stdout:Device: 6h/6d Inode: 222 Links: 1 Device type: fc,20 2026-03-09T18:27:25.062 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T18:27:25.062 INFO:teuthology.orchestra.run.vm09.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-09T18:27:25.062 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-09 18:27:08.305838824 +0000 2026-03-09T18:27:25.062 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-09 18:24:00.403629452 +0000 2026-03-09T18:27:25.062 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-09 18:24:00.403629452 +0000 2026-03-09T18:27:25.062 INFO:teuthology.orchestra.run.vm09.stdout: Birth: 2026-03-09 18:21:03.297000000 +0000 2026-03-09T18:27:25.063 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-09T18:27:25.144 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in 2026-03-09T18:27:25.145 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out 2026-03-09T18:27:25.145 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.000195776 s, 2.6 MB/s 2026-03-09T18:27:25.146 
DEBUG:teuthology.orchestra.run.vm09:> ! mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-09T18:27:25.234 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vdd 2026-03-09T18:27:25.294 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vdd 2026-03-09T18:27:25.294 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-09T18:27:25.294 INFO:teuthology.orchestra.run.vm09.stdout:Device: 6h/6d Inode: 256 Links: 1 Device type: fc,30 2026-03-09T18:27:25.294 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T18:27:25.294 INFO:teuthology.orchestra.run.vm09.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-09T18:27:25.294 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-09 18:27:08.335838939 +0000 2026-03-09T18:27:25.294 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-09 18:24:00.427629489 +0000 2026-03-09T18:27:25.294 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-09 18:24:00.427629489 +0000 2026-03-09T18:27:25.294 INFO:teuthology.orchestra.run.vm09.stdout: Birth: 2026-03-09 18:21:03.310000000 +0000 2026-03-09T18:27:25.294 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vdd of=/dev/null count=1 2026-03-09T18:27:25.363 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:27:25 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:27:25.136+0000 7f4fed366140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T18:27:25.369 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in 2026-03-09T18:27:25.369 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out 2026-03-09T18:27:25.369 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.000225011 s, 2.3 MB/s 2026-03-09T18:27:25.370 DEBUG:teuthology.orchestra.run.vm09:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-09T18:27:25.433 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vde 2026-03-09T18:27:25.494 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vde 2026-03-09T18:27:25.494 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-09T18:27:25.494 INFO:teuthology.orchestra.run.vm09.stdout:Device: 6h/6d Inode: 257 Links: 1 Device type: fc,40 2026-03-09T18:27:25.494 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T18:27:25.494 INFO:teuthology.orchestra.run.vm09.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-09T18:27:25.494 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-09 18:27:08.366839058 +0000 2026-03-09T18:27:25.494 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-09 18:24:00.409629461 +0000 2026-03-09T18:27:25.494 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-09 18:24:00.409629461 +0000 2026-03-09T18:27:25.494 INFO:teuthology.orchestra.run.vm09.stdout: Birth: 2026-03-09 18:21:03.321000000 +0000 2026-03-09T18:27:25.494 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vde of=/dev/null count=1 2026-03-09T18:27:25.563 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in 2026-03-09T18:27:25.563 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out 2026-03-09T18:27:25.563 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.000190767 s, 2.7 MB/s 2026-03-09T18:27:25.564 DEBUG:teuthology.orchestra.run.vm09:> ! mount | grep -v devtmpfs | grep -q /dev/vde 2026-03-09T18:27:25.629 INFO:tasks.cephadm:Deploying osd.0 on vm04 with /dev/vde... 
2026-03-09T18:27:25.629 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- lvm zap /dev/vde 2026-03-09T18:27:25.630 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:27:25 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:27:25.525+0000 7f4fed366140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T18:27:25.753 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:25 vm04 ceph-mon[57581]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:27:25.753 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:25 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:27:25.753 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:25 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:25.753 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:25 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:25.753 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:25 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:25.753 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:25 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:27:25.753 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:25 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:25.753 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:25 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' 
entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T18:27:25.753 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:25 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T18:27:25.753 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:25 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:25.753 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:25 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:25.753 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:25 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:25.753 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:25 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:27:25.753 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:25 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:25.753 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:25 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:27:25.753 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:25 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:25.753 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:25 vm04 ceph-mon[51427]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:27:25.753 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:25 vm04 ceph-mon[51427]: 
from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:27:25.753 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:25 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:25.753 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:25 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:25.753 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:25 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:25.753 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:25 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:27:25.753 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:25 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:25.753 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:25 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T18:27:25.753 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:25 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T18:27:25.754 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:25 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:25.754 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:25 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:25.754 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:25 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:25.754 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:25 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:27:25.754 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:25 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:25.754 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:25 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:27:25.754 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:25 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:25.832 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config 2026-03-09T18:27:26.012 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:27:25 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:27:25.751+0000 7f1e99b8f640 -1 mgr.server handle_report got status from non-daemon mon.c 2026-03-09T18:27:26.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:25 vm09 ceph-mon[54744]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:27:26.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:25 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:27:26.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:25 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:26.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:25 vm09 ceph-mon[54744]: 
from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:26.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:25 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:26.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:25 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:27:26.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:25 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:26.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:25 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T18:27:26.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:25 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T18:27:26.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:25 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:26.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:25 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:26.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:25 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:26.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:25 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:27:26.108 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:25 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:26.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:25 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:27:26.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:25 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:26.108 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:27:25 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T18:27:26.108 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:27:25 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-09T18:27:26.109 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:27:25 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: from numpy import show_config as show_numpy_config 2026-03-09T18:27:26.109 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:27:25 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:27:25.635+0000 7f4fed366140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T18:27:26.109 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:27:25 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:27:25.681+0000 7f4fed366140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T18:27:26.109 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:27:25 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:27:25.770+0000 7f4fed366140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T18:27:26.603 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:27:26 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:27:26.319+0000 7f4fed366140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T18:27:26.603 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:27:26 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:27:26.438+0000 7f4fed366140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T18:27:26.603 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:27:26 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:27:26.482+0000 7f4fed366140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T18:27:26.603 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:27:26 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:27:26.518+0000 7f4fed366140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T18:27:26.603 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:27:26 vm09 
ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:27:26.561+0000 7f4fed366140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T18:27:26.603 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:27:26 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:27:26.601+0000 7f4fed366140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T18:27:26.611 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:27:26.628 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph orch daemon add osd vm04:/dev/vde 2026-03-09T18:27:26.823 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config 2026-03-09T18:27:26.851 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:26 vm04 ceph-mon[51427]: Reconfiguring mgr.y (unknown last config time)... 2026-03-09T18:27:26.851 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:26 vm04 ceph-mon[51427]: Reconfiguring daemon mgr.y on vm04 2026-03-09T18:27:26.851 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:26 vm04 ceph-mon[57581]: Reconfiguring mgr.y (unknown last config time)... 2026-03-09T18:27:26.851 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:26 vm04 ceph-mon[57581]: Reconfiguring daemon mgr.y on vm04 2026-03-09T18:27:26.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:26 vm09 ceph-mon[54744]: Reconfiguring mgr.y (unknown last config time)... 
2026-03-09T18:27:26.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:26 vm09 ceph-mon[54744]: Reconfiguring daemon mgr.y on vm04 2026-03-09T18:27:26.858 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:27:26 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:27:26.786+0000 7f4fed366140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T18:27:26.858 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:27:26 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:27:26.842+0000 7f4fed366140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T18:27:27.358 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:27:27 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:27:27.079+0000 7f4fed366140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T18:27:27.696 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:27:27 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:27:27.383+0000 7f4fed366140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T18:27:27.696 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:27:27 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:27:27.428+0000 7f4fed366140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T18:27:27.696 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:27:27 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:27:27.473+0000 7f4fed366140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T18:27:27.696 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:27:27 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:27:27.560+0000 7f4fed366140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T18:27:27.696 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:27:27 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:27:27.600+0000 
7f4fed366140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T18:27:27.696 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:27:27 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:27:27.694+0000 7f4fed366140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T18:27:27.977 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:27 vm09 ceph-mon[54744]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:27:27.977 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:27 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:27.977 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:27 vm09 ceph-mon[54744]: from='client.14217 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:27:27.977 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:27 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:27:27.977 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:27 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:27:27.977 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:27 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:27.977 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:27:27 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:27:27.823+0000 7f4fed366140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T18:27:27.977 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:27:27 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 
2026-03-09T18:27:27.975+0000 7f4fed366140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T18:27:27.984 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:27 vm04 ceph-mon[51427]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:27:27.984 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:27 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:27.984 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:27 vm04 ceph-mon[51427]: from='client.14217 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:27:27.984 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:27 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:27:27.984 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:27 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:27:27.984 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:27 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:27.985 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:27 vm04 ceph-mon[57581]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:27:27.985 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:27 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:27.985 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:27 vm04 ceph-mon[57581]: from='client.14217 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:27:27.985 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:27 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:27:27.985 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:27 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:27:27.985 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:27 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:28.358 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:27:28 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:27:28.016+0000 7f4fed366140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T18:27:29.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:28 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/921863056' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "025c88ca-fa01-4cbd-9d6d-c54757ade897"}]: dispatch 2026-03-09T18:27:29.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:28 vm09 ceph-mon[54744]: from='client.? 
192.168.123.104:0/921863056' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "025c88ca-fa01-4cbd-9d6d-c54757ade897"}]': finished 2026-03-09T18:27:29.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:28 vm09 ceph-mon[54744]: osdmap e5: 1 total, 0 up, 1 in 2026-03-09T18:27:29.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:28 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:27:29.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:28 vm09 ceph-mon[54744]: Standby manager daemon x started 2026-03-09T18:27:29.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:28 vm09 ceph-mon[54744]: from='mgr.? 192.168.123.109:0/627614131' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T18:27:29.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:28 vm09 ceph-mon[54744]: from='mgr.? 192.168.123.109:0/627614131' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T18:27:29.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:28 vm09 ceph-mon[54744]: from='mgr.? 192.168.123.109:0/627614131' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T18:27:29.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:28 vm09 ceph-mon[54744]: from='mgr.? 192.168.123.109:0/627614131' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T18:27:29.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:28 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/2651534870' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:27:29.216 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:28 vm04 ceph-mon[51427]: from='client.? 
192.168.123.104:0/921863056' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "025c88ca-fa01-4cbd-9d6d-c54757ade897"}]: dispatch 2026-03-09T18:27:29.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:28 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/921863056' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "025c88ca-fa01-4cbd-9d6d-c54757ade897"}]': finished 2026-03-09T18:27:29.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:28 vm04 ceph-mon[51427]: osdmap e5: 1 total, 0 up, 1 in 2026-03-09T18:27:29.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:28 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:27:29.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:28 vm04 ceph-mon[51427]: Standby manager daemon x started 2026-03-09T18:27:29.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:28 vm04 ceph-mon[51427]: from='mgr.? 192.168.123.109:0/627614131' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T18:27:29.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:28 vm04 ceph-mon[51427]: from='mgr.? 192.168.123.109:0/627614131' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T18:27:29.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:28 vm04 ceph-mon[51427]: from='mgr.? 192.168.123.109:0/627614131' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T18:27:29.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:28 vm04 ceph-mon[51427]: from='mgr.? 192.168.123.109:0/627614131' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T18:27:29.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:28 vm04 ceph-mon[51427]: from='client.? 
192.168.123.104:0/2651534870' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:27:29.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:28 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/921863056' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "025c88ca-fa01-4cbd-9d6d-c54757ade897"}]: dispatch 2026-03-09T18:27:29.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:28 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/921863056' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "025c88ca-fa01-4cbd-9d6d-c54757ade897"}]': finished 2026-03-09T18:27:29.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:28 vm04 ceph-mon[57581]: osdmap e5: 1 total, 0 up, 1 in 2026-03-09T18:27:29.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:28 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:27:29.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:28 vm04 ceph-mon[57581]: Standby manager daemon x started 2026-03-09T18:27:29.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:28 vm04 ceph-mon[57581]: from='mgr.? 192.168.123.109:0/627614131' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T18:27:29.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:28 vm04 ceph-mon[57581]: from='mgr.? 192.168.123.109:0/627614131' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T18:27:29.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:28 vm04 ceph-mon[57581]: from='mgr.? 192.168.123.109:0/627614131' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T18:27:29.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:28 vm04 ceph-mon[57581]: from='mgr.? 
192.168.123.109:0/627614131' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T18:27:29.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:28 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/2651534870' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:27:30.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:29 vm09 ceph-mon[54744]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:27:30.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:29 vm09 ceph-mon[54744]: mgrmap e14: y(active, since 30s), standbys: x 2026-03-09T18:27:30.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:29 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T18:27:30.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:29 vm04 ceph-mon[57581]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:27:30.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:29 vm04 ceph-mon[57581]: mgrmap e14: y(active, since 30s), standbys: x 2026-03-09T18:27:30.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:29 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T18:27:30.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:29 vm04 ceph-mon[51427]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:27:30.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:29 vm04 ceph-mon[51427]: mgrmap e14: y(active, since 30s), standbys: x 2026-03-09T18:27:30.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:29 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T18:27:32.116 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 
18:27:31 vm04 ceph-mon[51427]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:27:32.117 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:31 vm04 ceph-mon[57581]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:27:32.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:31 vm09 ceph-mon[54744]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:27:33.346 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:33 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T18:27:33.346 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:33 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:33.346 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:33 vm04 ceph-mon[57581]: Deploying daemon osd.0 on vm04 2026-03-09T18:27:33.346 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:33 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T18:27:33.346 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:33 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:33.346 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:33 vm04 ceph-mon[51427]: Deploying daemon osd.0 on vm04 2026-03-09T18:27:33.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:33 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T18:27:33.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:33 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
2026-03-09T18:27:33.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:33 vm09 ceph-mon[54744]: Deploying daemon osd.0 on vm04 2026-03-09T18:27:34.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:34 vm09 ceph-mon[54744]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:27:34.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:34 vm04 ceph-mon[51427]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:27:34.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:34 vm04 ceph-mon[57581]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:27:35.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:35 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:27:35.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:35 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:35.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:35 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:35.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:35 vm04 ceph-mon[51427]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:27:35.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:35 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:27:35.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:35 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:35.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:35 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:35.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:35 vm04 ceph-mon[57581]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 
0 B avail 2026-03-09T18:27:35.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:35 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:27:35.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:35 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:35.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:35 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:35.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:35 vm09 ceph-mon[54744]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:27:35.693 INFO:teuthology.orchestra.run.vm04.stdout:Created osd(s) 0 on host 'vm04' 2026-03-09T18:27:35.767 DEBUG:teuthology.orchestra.run.vm04:osd.0> sudo journalctl -f -n 0 -u ceph-5769e1c8-1be5-11f1-a591-591820987f3e@osd.0.service 2026-03-09T18:27:35.768 INFO:tasks.cephadm:Deploying osd.1 on vm04 with /dev/vdd... 
2026-03-09T18:27:35.768 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- lvm zap /dev/vdd 2026-03-09T18:27:36.090 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config 2026-03-09T18:27:36.113 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 18:27:35 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-0[60983]: 2026-03-09T18:27:35.909+0000 7f61b96ee740 -1 osd.0 0 log_to_monitors true 2026-03-09T18:27:36.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:36 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:36.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:36 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:36.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:36 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:36.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:36 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:27:36.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:36 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:36.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:36 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:27:36.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:36 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:36.467 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:36 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:36.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:36 vm04 ceph-mon[51427]: from='osd.0 [v2:192.168.123.104:6802/1654539160,v1:192.168.123.104:6803/1654539160]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T18:27:36.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:36 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:36.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:36 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:36.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:36 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:36.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:36 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:27:36.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:36 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:36.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:36 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:27:36.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:36 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:36.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:36 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:36.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:36 vm04 
ceph-mon[57581]: from='osd.0 [v2:192.168.123.104:6802/1654539160,v1:192.168.123.104:6803/1654539160]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T18:27:36.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:36 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:36.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:36 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:36.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:36 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:36.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:36 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:27:36.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:36 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:36.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:36 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:27:36.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:36 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:36.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:36 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:36.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:36 vm09 ceph-mon[54744]: from='osd.0 [v2:192.168.123.104:6802/1654539160,v1:192.168.123.104:6803/1654539160]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T18:27:37.832 
INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:27:37.852 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph orch daemon add osd vm04:/dev/vdd 2026-03-09T18:27:37.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:37 vm04 ceph-mon[51427]: from='osd.0 [v2:192.168.123.104:6802/1654539160,v1:192.168.123.104:6803/1654539160]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T18:27:37.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:37 vm04 ceph-mon[51427]: osdmap e6: 1 total, 0 up, 1 in 2026-03-09T18:27:37.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:37 vm04 ceph-mon[51427]: from='osd.0 [v2:192.168.123.104:6802/1654539160,v1:192.168.123.104:6803/1654539160]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T18:27:37.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:37 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:27:37.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:37 vm04 ceph-mon[51427]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:27:37.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:37 vm04 ceph-mon[51427]: Detected new or changed devices on vm04 2026-03-09T18:27:37.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:37 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:37.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:37 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:37.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 
18:27:37 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:27:37.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:37 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:37.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:37 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:27:37.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:37 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:37.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:37 vm04 ceph-mon[57581]: from='osd.0 [v2:192.168.123.104:6802/1654539160,v1:192.168.123.104:6803/1654539160]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T18:27:37.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:37 vm04 ceph-mon[57581]: osdmap e6: 1 total, 0 up, 1 in 2026-03-09T18:27:37.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:37 vm04 ceph-mon[57581]: from='osd.0 [v2:192.168.123.104:6802/1654539160,v1:192.168.123.104:6803/1654539160]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T18:27:37.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:37 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:27:37.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:37 vm04 ceph-mon[57581]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:27:37.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:37 vm04 
ceph-mon[57581]: Detected new or changed devices on vm04 2026-03-09T18:27:37.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:37 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:37.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:37 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:37.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:37 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:27:37.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:37 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:37.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:37 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:27:37.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:37 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:37.968 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 18:27:37 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-0[60983]: 2026-03-09T18:27:37.702+0000 7f61b566f640 -1 osd.0 0 waiting for initial osdmap 2026-03-09T18:27:37.968 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 18:27:37 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-0[60983]: 2026-03-09T18:27:37.716+0000 7f61b1499640 -1 osd.0 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T18:27:38.034 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config 2026-03-09T18:27:38.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:37 vm09 
ceph-mon[54744]: from='osd.0 [v2:192.168.123.104:6802/1654539160,v1:192.168.123.104:6803/1654539160]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T18:27:38.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:37 vm09 ceph-mon[54744]: osdmap e6: 1 total, 0 up, 1 in 2026-03-09T18:27:38.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:37 vm09 ceph-mon[54744]: from='osd.0 [v2:192.168.123.104:6802/1654539160,v1:192.168.123.104:6803/1654539160]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T18:27:38.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:37 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:27:38.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:37 vm09 ceph-mon[54744]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:27:38.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:37 vm09 ceph-mon[54744]: Detected new or changed devices on vm04 2026-03-09T18:27:38.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:37 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:38.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:37 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:38.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:37 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:27:38.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:37 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:38.108 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:37 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:27:38.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:37 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:38.899 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:38 vm04 ceph-mon[51427]: from='osd.0 [v2:192.168.123.104:6802/1654539160,v1:192.168.123.104:6803/1654539160]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-09T18:27:38.900 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:38 vm04 ceph-mon[51427]: osdmap e7: 1 total, 0 up, 1 in 2026-03-09T18:27:38.900 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:38 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:27:38.900 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:38 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:27:38.900 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:38 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:27:38.900 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:38 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:38.900 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:38 vm04 ceph-mon[57581]: from='osd.0 [v2:192.168.123.104:6802/1654539160,v1:192.168.123.104:6803/1654539160]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, 
"weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-09T18:27:38.900 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:38 vm04 ceph-mon[57581]: osdmap e7: 1 total, 0 up, 1 in 2026-03-09T18:27:38.900 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:38 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:27:38.900 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:38 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:27:38.900 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:38 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:27:38.900 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:38 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:39.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:38 vm09 ceph-mon[54744]: from='osd.0 [v2:192.168.123.104:6802/1654539160,v1:192.168.123.104:6803/1654539160]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-09T18:27:39.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:38 vm09 ceph-mon[54744]: osdmap e7: 1 total, 0 up, 1 in 2026-03-09T18:27:39.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:38 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:27:39.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:38 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], 
"format": "json"}]: dispatch 2026-03-09T18:27:39.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:38 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:27:39.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:38 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:39.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:39 vm04 ceph-mon[57581]: purged_snaps scrub starts 2026-03-09T18:27:39.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:39 vm04 ceph-mon[57581]: purged_snaps scrub ok 2026-03-09T18:27:39.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:39 vm04 ceph-mon[57581]: from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:27:39.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:39 vm04 ceph-mon[57581]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:27:39.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:39 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:27:39.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:39 vm04 ceph-mon[57581]: osd.0 [v2:192.168.123.104:6802/1654539160,v1:192.168.123.104:6803/1654539160] boot 2026-03-09T18:27:39.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:39 vm04 ceph-mon[57581]: osdmap e8: 1 total, 1 up, 1 in 2026-03-09T18:27:39.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:39 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:27:39.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:39 vm04 ceph-mon[57581]: 
from='client.? 192.168.123.104:0/3629400061' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f62082e3-9d11-4672-a72c-53d7908dbcd4"}]: dispatch 2026-03-09T18:27:39.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:39 vm04 ceph-mon[57581]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f62082e3-9d11-4672-a72c-53d7908dbcd4"}]: dispatch 2026-03-09T18:27:39.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:39 vm04 ceph-mon[57581]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f62082e3-9d11-4672-a72c-53d7908dbcd4"}]': finished 2026-03-09T18:27:39.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:39 vm04 ceph-mon[57581]: osdmap e9: 2 total, 1 up, 2 in 2026-03-09T18:27:39.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:39 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:27:39.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:39 vm04 ceph-mon[57581]: from='client.? 
192.168.123.104:0/1513411110' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:27:39.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:39 vm04 ceph-mon[51427]: purged_snaps scrub starts 2026-03-09T18:27:39.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:39 vm04 ceph-mon[51427]: purged_snaps scrub ok 2026-03-09T18:27:39.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:39 vm04 ceph-mon[51427]: from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:27:39.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:39 vm04 ceph-mon[51427]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:27:39.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:39 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:27:39.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:39 vm04 ceph-mon[51427]: osd.0 [v2:192.168.123.104:6802/1654539160,v1:192.168.123.104:6803/1654539160] boot 2026-03-09T18:27:39.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:39 vm04 ceph-mon[51427]: osdmap e8: 1 total, 1 up, 1 in 2026-03-09T18:27:39.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:39 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:27:39.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:39 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3629400061' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f62082e3-9d11-4672-a72c-53d7908dbcd4"}]: dispatch 2026-03-09T18:27:39.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:39 vm04 ceph-mon[51427]: from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f62082e3-9d11-4672-a72c-53d7908dbcd4"}]: dispatch 2026-03-09T18:27:39.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:39 vm04 ceph-mon[51427]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f62082e3-9d11-4672-a72c-53d7908dbcd4"}]': finished 2026-03-09T18:27:39.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:39 vm04 ceph-mon[51427]: osdmap e9: 2 total, 1 up, 2 in 2026-03-09T18:27:39.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:39 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:27:39.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:39 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/1513411110' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:27:40.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:39 vm09 ceph-mon[54744]: purged_snaps scrub starts 2026-03-09T18:27:40.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:39 vm09 ceph-mon[54744]: purged_snaps scrub ok 2026-03-09T18:27:40.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:39 vm09 ceph-mon[54744]: from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:27:40.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:39 vm09 ceph-mon[54744]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:27:40.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:39 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:27:40.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:39 vm09 ceph-mon[54744]: osd.0 [v2:192.168.123.104:6802/1654539160,v1:192.168.123.104:6803/1654539160] boot 2026-03-09T18:27:40.108 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:39 vm09 ceph-mon[54744]: osdmap e8: 1 total, 1 up, 1 in 2026-03-09T18:27:40.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:39 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:27:40.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:39 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3629400061' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f62082e3-9d11-4672-a72c-53d7908dbcd4"}]: dispatch 2026-03-09T18:27:40.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:39 vm09 ceph-mon[54744]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f62082e3-9d11-4672-a72c-53d7908dbcd4"}]: dispatch 2026-03-09T18:27:40.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:39 vm09 ceph-mon[54744]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f62082e3-9d11-4672-a72c-53d7908dbcd4"}]': finished 2026-03-09T18:27:40.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:39 vm09 ceph-mon[54744]: osdmap e9: 2 total, 1 up, 2 in 2026-03-09T18:27:40.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:39 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:27:40.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:39 vm09 ceph-mon[54744]: from='client.? 
192.168.123.104:0/1513411110' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:27:42.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:41 vm09 ceph-mon[54744]: pgmap v20: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:27:42.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:41 vm04 ceph-mon[57581]: pgmap v20: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:27:42.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:41 vm04 ceph-mon[51427]: pgmap v20: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:27:43.785 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:43 vm04 ceph-mon[51427]: pgmap v21: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:27:43.785 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:43 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T18:27:43.785 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:43 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:43.785 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:43 vm04 ceph-mon[57581]: pgmap v21: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:27:43.785 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:43 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T18:27:43.785 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:43 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:44.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:43 vm09 ceph-mon[54744]: pgmap v21: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 
2026-03-09T18:27:44.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:43 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T18:27:44.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:43 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:45.024 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:44 vm04 ceph-mon[51427]: Deploying daemon osd.1 on vm04 2026-03-09T18:27:45.024 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:44 vm04 ceph-mon[57581]: Deploying daemon osd.1 on vm04 2026-03-09T18:27:45.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:44 vm09 ceph-mon[54744]: Deploying daemon osd.1 on vm04 2026-03-09T18:27:46.000 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:45 vm04 ceph-mon[51427]: pgmap v22: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:27:46.000 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:45 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:27:46.000 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:45 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:46.000 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:45 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:46.000 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:45 vm04 ceph-mon[57581]: pgmap v22: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:27:46.000 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:45 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:27:46.000 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:45 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:46.000 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:45 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:46.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:45 vm09 ceph-mon[54744]: pgmap v22: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:27:46.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:45 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:27:46.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:45 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:46.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:45 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:46.721 INFO:teuthology.orchestra.run.vm04.stdout:Created osd(s) 1 on host 'vm04' 2026-03-09T18:27:46.785 DEBUG:teuthology.orchestra.run.vm04:osd.1> sudo journalctl -f -n 0 -u ceph-5769e1c8-1be5-11f1-a591-591820987f3e@osd.1.service 2026-03-09T18:27:46.787 INFO:tasks.cephadm:Deploying osd.2 on vm04 with /dev/vdc... 
2026-03-09T18:27:46.787 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- lvm zap /dev/vdc 2026-03-09T18:27:47.084 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config 2026-03-09T18:27:47.345 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:47 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:47.345 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:47 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:47.345 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:47 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:47.346 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:47 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:27:47.346 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:47 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:47.346 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:47 vm04 ceph-mon[51427]: pgmap v23: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:27:47.346 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:47 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:27:47.346 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:47 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:47.346 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:47 
vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:47.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:47 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:47.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:47 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:47.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:47 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:47.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:47 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:27:47.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:47 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:47.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:47 vm09 ceph-mon[54744]: pgmap v23: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:27:47.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:47 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:27:47.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:47 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:47.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:47 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:47.679 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:47 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:47.679 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:47 vm04 ceph-mon[57581]: 
from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:47.679 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:47 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:47.679 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:47 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:27:47.679 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:47 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:47.679 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:47 vm04 ceph-mon[57581]: pgmap v23: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:27:47.679 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:47 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:27:47.679 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:47 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:47.679 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:47 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:47.948 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 18:27:47 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-1[65871]: 2026-03-09T18:27:47.709+0000 7f43c37bb740 -1 osd.1 0 log_to_monitors true 2026-03-09T18:27:48.585 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:27:48.604 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph orch daemon add osd vm04:/dev/vdc 2026-03-09T18:27:48.631 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:48 vm04 ceph-mon[51427]: from='osd.1 [v2:192.168.123.104:6810/3519470547,v1:192.168.123.104:6811/3519470547]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T18:27:48.631 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:48 vm04 ceph-mon[51427]: Detected new or changed devices on vm04 2026-03-09T18:27:48.631 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:48 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:48.631 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:48 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:48.631 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:48 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:27:48.631 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:48 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:48.631 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:48 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:27:48.631 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:48 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:48.631 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:48 vm04 ceph-mon[57581]: from='osd.1 [v2:192.168.123.104:6810/3519470547,v1:192.168.123.104:6811/3519470547]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T18:27:48.631 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:48 vm04 ceph-mon[57581]: Detected new or 
changed devices on vm04 2026-03-09T18:27:48.631 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:48 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:48.631 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:48 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:48.631 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:48 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:27:48.631 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:48 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:48.631 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:48 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:27:48.631 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:48 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:48.788 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config 2026-03-09T18:27:48.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:48 vm09 ceph-mon[54744]: from='osd.1 [v2:192.168.123.104:6810/3519470547,v1:192.168.123.104:6811/3519470547]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T18:27:48.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:48 vm09 ceph-mon[54744]: Detected new or changed devices on vm04 2026-03-09T18:27:48.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:48 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:48.858 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:48 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:48.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:48 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:27:48.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:48 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:48.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:48 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:27:48.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:48 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:27:49.690 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:49 vm04 ceph-mon[51427]: from='osd.1 [v2:192.168.123.104:6810/3519470547,v1:192.168.123.104:6811/3519470547]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T18:27:49.690 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:49 vm04 ceph-mon[51427]: osdmap e10: 2 total, 1 up, 2 in 2026-03-09T18:27:49.690 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:49 vm04 ceph-mon[51427]: from='osd.1 [v2:192.168.123.104:6810/3519470547,v1:192.168.123.104:6811/3519470547]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T18:27:49.690 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:49 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:27:49.690 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:49 vm04 ceph-mon[51427]: pgmap v25: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:27:49.690 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:49 vm04 ceph-mon[51427]: from='client.24140 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:27:49.690 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:49 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:27:49.690 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:49 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:27:49.690 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:49 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:49.690 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:49 vm04 ceph-mon[57581]: from='osd.1 [v2:192.168.123.104:6810/3519470547,v1:192.168.123.104:6811/3519470547]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T18:27:49.690 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:49 vm04 ceph-mon[57581]: osdmap e10: 2 total, 1 up, 2 in 2026-03-09T18:27:49.690 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:49 vm04 ceph-mon[57581]: from='osd.1 [v2:192.168.123.104:6810/3519470547,v1:192.168.123.104:6811/3519470547]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T18:27:49.690 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:49 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 
cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:27:49.690 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:49 vm04 ceph-mon[57581]: pgmap v25: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:27:49.690 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:49 vm04 ceph-mon[57581]: from='client.24140 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:27:49.690 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:49 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:27:49.690 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:49 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:27:49.690 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:49 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:49.690 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 18:27:49 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-1[65871]: 2026-03-09T18:27:49.392+0000 7f43bf73c640 -1 osd.1 0 waiting for initial osdmap 2026-03-09T18:27:49.690 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 18:27:49 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-1[65871]: 2026-03-09T18:27:49.402+0000 7f43bb566640 -1 osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T18:27:49.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:49 vm09 ceph-mon[54744]: from='osd.1 [v2:192.168.123.104:6810/3519470547,v1:192.168.123.104:6811/3519470547]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T18:27:49.858 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:49 vm09 ceph-mon[54744]: osdmap e10: 2 total, 1 up, 2 in 2026-03-09T18:27:49.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:49 vm09 ceph-mon[54744]: from='osd.1 [v2:192.168.123.104:6810/3519470547,v1:192.168.123.104:6811/3519470547]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T18:27:49.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:49 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:27:49.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:49 vm09 ceph-mon[54744]: pgmap v25: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:27:49.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:49 vm09 ceph-mon[54744]: from='client.24140 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:27:49.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:49 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:27:49.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:49 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:27:49.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:49 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:50.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:50 vm04 ceph-mon[57581]: from='osd.1 [v2:192.168.123.104:6810/3519470547,v1:192.168.123.104:6811/3519470547]' entity='osd.1' cmd='[{"prefix": "osd crush 
create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-09T18:27:50.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:50 vm04 ceph-mon[57581]: osdmap e11: 2 total, 1 up, 2 in 2026-03-09T18:27:50.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:50 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:27:50.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:50 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:27:50.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:50 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/2051362612' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9c64b919-8d93-49bb-84a4-7291defe1cb0"}]: dispatch 2026-03-09T18:27:50.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:50 vm04 ceph-mon[57581]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9c64b919-8d93-49bb-84a4-7291defe1cb0"}]: dispatch 2026-03-09T18:27:50.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:50 vm04 ceph-mon[57581]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9c64b919-8d93-49bb-84a4-7291defe1cb0"}]': finished 2026-03-09T18:27:50.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:50 vm04 ceph-mon[57581]: osd.1 [v2:192.168.123.104:6810/3519470547,v1:192.168.123.104:6811/3519470547] boot 2026-03-09T18:27:50.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:50 vm04 ceph-mon[57581]: osdmap e12: 3 total, 2 up, 3 in 2026-03-09T18:27:50.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:50 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:27:50.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:50 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:27:50.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:50 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/1753045175' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:27:50.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:50 vm04 ceph-mon[51427]: from='osd.1 [v2:192.168.123.104:6810/3519470547,v1:192.168.123.104:6811/3519470547]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-09T18:27:50.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:50 vm04 ceph-mon[51427]: osdmap e11: 2 total, 1 up, 2 in 2026-03-09T18:27:50.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:50 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:27:50.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:50 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:27:50.717 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:50 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/2051362612' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9c64b919-8d93-49bb-84a4-7291defe1cb0"}]: dispatch 2026-03-09T18:27:50.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:50 vm04 ceph-mon[51427]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9c64b919-8d93-49bb-84a4-7291defe1cb0"}]: dispatch 2026-03-09T18:27:50.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:50 vm04 ceph-mon[51427]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9c64b919-8d93-49bb-84a4-7291defe1cb0"}]': finished 2026-03-09T18:27:50.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:50 vm04 ceph-mon[51427]: osd.1 [v2:192.168.123.104:6810/3519470547,v1:192.168.123.104:6811/3519470547] boot 2026-03-09T18:27:50.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:50 vm04 ceph-mon[51427]: osdmap e12: 3 total, 2 up, 3 in 2026-03-09T18:27:50.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:50 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:27:50.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:50 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:27:50.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:50 vm04 ceph-mon[51427]: from='client.? 
192.168.123.104:0/1753045175' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:27:50.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:50 vm09 ceph-mon[54744]: from='osd.1 [v2:192.168.123.104:6810/3519470547,v1:192.168.123.104:6811/3519470547]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-09T18:27:50.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:50 vm09 ceph-mon[54744]: osdmap e11: 2 total, 1 up, 2 in 2026-03-09T18:27:50.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:50 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:27:50.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:50 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:27:50.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:50 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/2051362612' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9c64b919-8d93-49bb-84a4-7291defe1cb0"}]: dispatch 2026-03-09T18:27:50.859 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:50 vm09 ceph-mon[54744]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9c64b919-8d93-49bb-84a4-7291defe1cb0"}]: dispatch 2026-03-09T18:27:50.859 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:50 vm09 ceph-mon[54744]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9c64b919-8d93-49bb-84a4-7291defe1cb0"}]': finished 2026-03-09T18:27:50.859 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:50 vm09 ceph-mon[54744]: osd.1 [v2:192.168.123.104:6810/3519470547,v1:192.168.123.104:6811/3519470547] boot 2026-03-09T18:27:50.859 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:50 vm09 ceph-mon[54744]: osdmap e12: 3 total, 2 up, 3 in 2026-03-09T18:27:50.859 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:50 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:27:50.859 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:50 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:27:50.859 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:50 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/1753045175' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:27:51.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:51 vm04 ceph-mon[57581]: purged_snaps scrub starts 2026-03-09T18:27:51.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:51 vm04 ceph-mon[57581]: purged_snaps scrub ok 2026-03-09T18:27:51.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:51 vm04 ceph-mon[57581]: pgmap v28: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:27:51.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:51 vm04 ceph-mon[51427]: purged_snaps scrub starts 2026-03-09T18:27:51.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:51 vm04 ceph-mon[51427]: purged_snaps scrub ok 2026-03-09T18:27:51.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:51 vm04 ceph-mon[51427]: pgmap v28: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:27:51.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:51 vm09 
ceph-mon[54744]: purged_snaps scrub starts
2026-03-09T18:27:51.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:51 vm09 ceph-mon[54744]: purged_snaps scrub ok
2026-03-09T18:27:51.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:51 vm09 ceph-mon[54744]: pgmap v28: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T18:27:52.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:52 vm04 ceph-mon[57581]: osdmap e13: 3 total, 2 up, 3 in
2026-03-09T18:27:52.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:52 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T18:27:52.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:52 vm04 ceph-mon[51427]: osdmap e13: 3 total, 2 up, 3 in
2026-03-09T18:27:52.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:52 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T18:27:52.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:52 vm09 ceph-mon[54744]: osdmap e13: 3 total, 2 up, 3 in
2026-03-09T18:27:52.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:52 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T18:27:53.689 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:53 vm04 ceph-mon[51427]: pgmap v30: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T18:27:53.689 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:53 vm04 ceph-mon[57581]: pgmap v30: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T18:27:53.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:53 vm09 ceph-mon[54744]: pgmap v30: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T18:27:54.595 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:54 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-09T18:27:54.595 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:54 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:27:54.595 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:54 vm04 ceph-mon[51427]: Deploying daemon osd.2 on vm04
2026-03-09T18:27:54.595 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:54 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-09T18:27:54.595 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:54 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:27:54.595 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:54 vm04 ceph-mon[57581]: Deploying daemon osd.2 on vm04
2026-03-09T18:27:54.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:54 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-09T18:27:54.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:54 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:27:54.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:54 vm09 ceph-mon[54744]: Deploying daemon osd.2 on vm04
2026-03-09T18:27:55.700 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:55 vm04 ceph-mon[51427]: pgmap v31: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T18:27:55.700 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:55 vm04 ceph-mon[57581]: pgmap v31: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T18:27:55.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:55 vm09 ceph-mon[54744]: pgmap v31: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T18:27:56.694 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:56 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:27:56.694 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:56 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:56.694 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:56 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:56.694 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:56 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:27:56.694 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:56 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:56.694 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:56 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:56.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:56 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:27:56.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:56 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:56.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:56 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:57.551 INFO:teuthology.orchestra.run.vm04.stdout:Created osd(s) 2 on host 'vm04'
2026-03-09T18:27:57.614 DEBUG:teuthology.orchestra.run.vm04:osd.2> sudo journalctl -f -n 0 -u ceph-5769e1c8-1be5-11f1-a591-591820987f3e@osd.2.service
2026-03-09T18:27:57.656 INFO:tasks.cephadm:Deploying osd.3 on vm04 with /dev/vdb...
2026-03-09T18:27:57.656 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- lvm zap /dev/vdb
2026-03-09T18:27:57.937 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config
2026-03-09T18:27:58.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:58 vm04 ceph-mon[51427]: pgmap v32: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T18:27:58.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:58 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:58.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:58 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:58.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:58 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:27:58.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:58 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:27:58.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:58 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:58.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:58 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:27:58.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:58 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:58.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:58 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:58.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:58 vm04 ceph-mon[51427]: from='osd.2 [v2:192.168.123.104:6818/1080091581,v1:192.168.123.104:6819/1080091581]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-09T18:27:58.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:58 vm04 ceph-mon[57581]: pgmap v32: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T18:27:58.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:58 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:58.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:58 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:58.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:58 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:27:58.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:58 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:27:58.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:58 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:58.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:58 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:27:58.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:58 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:58.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:58 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:58.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:58 vm04 ceph-mon[57581]: from='osd.2 [v2:192.168.123.104:6818/1080091581,v1:192.168.123.104:6819/1080091581]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-09T18:27:58.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:58 vm09 ceph-mon[54744]: pgmap v32: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T18:27:58.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:58 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:58.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:58 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:58.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:58 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:27:58.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:58 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:27:58.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:58 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:58.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:58 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:27:58.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:58 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:58.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:58 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:58.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:58 vm09 ceph-mon[54744]: from='osd.2 [v2:192.168.123.104:6818/1080091581,v1:192.168.123.104:6819/1080091581]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-09T18:27:59.252 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T18:27:59.270 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph orch daemon add osd vm04:/dev/vdb
2026-03-09T18:27:59.439 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config
2026-03-09T18:27:59.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:59 vm04 ceph-mon[51427]: Detected new or changed devices on vm04
2026-03-09T18:27:59.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:59 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:59.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:59 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:59.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:59 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch
2026-03-09T18:27:59.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:59 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:27:59.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:59 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:27:59.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:59 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:59.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:59 vm04 ceph-mon[51427]: from='osd.2 [v2:192.168.123.104:6818/1080091581,v1:192.168.123.104:6819/1080091581]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
2026-03-09T18:27:59.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:59 vm04 ceph-mon[51427]: osdmap e14: 3 total, 2 up, 3 in
2026-03-09T18:27:59.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:59 vm04 ceph-mon[51427]: from='osd.2 [v2:192.168.123.104:6818/1080091581,v1:192.168.123.104:6819/1080091581]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-09T18:27:59.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:59 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T18:27:59.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:27:59 vm04 ceph-mon[51427]: pgmap v34: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T18:27:59.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:59 vm04 ceph-mon[57581]: Detected new or changed devices on vm04
2026-03-09T18:27:59.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:59 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:59.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:59 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:59.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:59 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch
2026-03-09T18:27:59.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:59 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:27:59.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:59 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:27:59.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:59 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:59.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:59 vm04 ceph-mon[57581]: from='osd.2 [v2:192.168.123.104:6818/1080091581,v1:192.168.123.104:6819/1080091581]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
2026-03-09T18:27:59.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:59 vm04 ceph-mon[57581]: osdmap e14: 3 total, 2 up, 3 in
2026-03-09T18:27:59.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:59 vm04 ceph-mon[57581]: from='osd.2 [v2:192.168.123.104:6818/1080091581,v1:192.168.123.104:6819/1080091581]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-09T18:27:59.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:59 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T18:27:59.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:27:59 vm04 ceph-mon[57581]: pgmap v34: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T18:27:59.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:59 vm09 ceph-mon[54744]: Detected new or changed devices on vm04
2026-03-09T18:27:59.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:59 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:59.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:59 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:59.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:59 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch
2026-03-09T18:27:59.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:59 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:27:59.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:59 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:27:59.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:59 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:27:59.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:59 vm09 ceph-mon[54744]: from='osd.2 [v2:192.168.123.104:6818/1080091581,v1:192.168.123.104:6819/1080091581]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
2026-03-09T18:27:59.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:59 vm09 ceph-mon[54744]: osdmap e14: 3 total, 2 up, 3 in
2026-03-09T18:27:59.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:59 vm09 ceph-mon[54744]: from='osd.2 [v2:192.168.123.104:6818/1080091581,v1:192.168.123.104:6819/1080091581]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-09T18:27:59.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:59 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T18:27:59.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:27:59 vm09 ceph-mon[54744]: pgmap v34: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T18:28:00.666 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:00 vm04 ceph-mon[51427]: from='osd.2 [v2:192.168.123.104:6818/1080091581,v1:192.168.123.104:6819/1080091581]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished
2026-03-09T18:28:00.666 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:00 vm04 ceph-mon[51427]: osdmap e15: 3 total, 2 up, 3 in
2026-03-09T18:28:00.666 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:00 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T18:28:00.666 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:00 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T18:28:00.666 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:00 vm04 ceph-mon[51427]: from='client.14295 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:28:00.666 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:00 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T18:28:00.666 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:00 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-09T18:28:00.666 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:00 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:28:00.666 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:00 vm04 ceph-mon[51427]: from='osd.2 [v2:192.168.123.104:6818/1080091581,v1:192.168.123.104:6819/1080091581]' entity='osd.2'
2026-03-09T18:28:00.666 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:00 vm04 ceph-mon[57581]: from='osd.2 [v2:192.168.123.104:6818/1080091581,v1:192.168.123.104:6819/1080091581]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished
2026-03-09T18:28:00.666 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:00 vm04 ceph-mon[57581]: osdmap e15: 3 total, 2 up, 3 in
2026-03-09T18:28:00.666 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:00 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T18:28:00.666 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:00 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T18:28:00.666 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:00 vm04 ceph-mon[57581]: from='client.14295 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:28:00.667 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:00 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T18:28:00.667 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:00 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-09T18:28:00.667 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:00 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:28:00.667 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:00 vm04 ceph-mon[57581]: from='osd.2 [v2:192.168.123.104:6818/1080091581,v1:192.168.123.104:6819/1080091581]' entity='osd.2'
2026-03-09T18:28:00.667 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 18:28:00 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-2[71119]: 2026-03-09T18:28:00.444+0000 7f12ba62e640 -1 osd.2 0 waiting for initial osdmap
2026-03-09T18:28:00.667 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 18:28:00 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-2[71119]: 2026-03-09T18:28:00.454+0000 7f12b5c57640 -1 osd.2 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-09T18:28:00.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:00 vm09 ceph-mon[54744]: from='osd.2 [v2:192.168.123.104:6818/1080091581,v1:192.168.123.104:6819/1080091581]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished
2026-03-09T18:28:00.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:00 vm09 ceph-mon[54744]: osdmap e15: 3 total, 2 up, 3 in
2026-03-09T18:28:00.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:00 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T18:28:00.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:00 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T18:28:00.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:00 vm09 ceph-mon[54744]: from='client.14295 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:28:00.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:00 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T18:28:00.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:00 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-09T18:28:00.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:00 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:28:00.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:00 vm09 ceph-mon[54744]: from='osd.2 [v2:192.168.123.104:6818/1080091581,v1:192.168.123.104:6819/1080091581]' entity='osd.2'
2026-03-09T18:28:01.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:01 vm09 ceph-mon[54744]: purged_snaps scrub starts
2026-03-09T18:28:01.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:01 vm09 ceph-mon[54744]: purged_snaps scrub ok
2026-03-09T18:28:01.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:01 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T18:28:01.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:01 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/2284869319' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c3feb6a9-175f-4b52-934d-734e9f86504a"}]: dispatch
2026-03-09T18:28:01.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:01 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/2284869319' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c3feb6a9-175f-4b52-934d-734e9f86504a"}]': finished
2026-03-09T18:28:01.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:01 vm09 ceph-mon[54744]: osd.2 [v2:192.168.123.104:6818/1080091581,v1:192.168.123.104:6819/1080091581] boot
2026-03-09T18:28:01.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:01 vm09 ceph-mon[54744]: osdmap e16: 4 total, 3 up, 4 in
2026-03-09T18:28:01.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:01 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T18:28:01.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:01 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T18:28:01.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:01 vm09 ceph-mon[54744]: pgmap v37: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T18:28:01.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:01 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
2026-03-09T18:28:01.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:01 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/1990391998' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-09T18:28:01.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:01 vm04 ceph-mon[51427]: purged_snaps scrub starts
2026-03-09T18:28:01.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:01 vm04 ceph-mon[51427]: purged_snaps scrub ok
2026-03-09T18:28:01.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:01 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T18:28:01.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:01 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/2284869319' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c3feb6a9-175f-4b52-934d-734e9f86504a"}]: dispatch
2026-03-09T18:28:01.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:01 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/2284869319' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c3feb6a9-175f-4b52-934d-734e9f86504a"}]': finished
2026-03-09T18:28:01.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:01 vm04 ceph-mon[51427]: osd.2 [v2:192.168.123.104:6818/1080091581,v1:192.168.123.104:6819/1080091581] boot
2026-03-09T18:28:01.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:01 vm04 ceph-mon[51427]: osdmap e16: 4 total, 3 up, 4 in
2026-03-09T18:28:01.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:01 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T18:28:01.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:01 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T18:28:01.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:01 vm04 ceph-mon[51427]: pgmap v37: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T18:28:01.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:01 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
2026-03-09T18:28:01.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:01 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/1990391998' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-09T18:28:01.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:01 vm04 ceph-mon[57581]: purged_snaps scrub starts
2026-03-09T18:28:01.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:01 vm04 ceph-mon[57581]: purged_snaps scrub ok
2026-03-09T18:28:01.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:01 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T18:28:01.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:01 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/2284869319' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c3feb6a9-175f-4b52-934d-734e9f86504a"}]: dispatch
2026-03-09T18:28:01.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:01 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/2284869319' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c3feb6a9-175f-4b52-934d-734e9f86504a"}]': finished
2026-03-09T18:28:01.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:01 vm04 ceph-mon[57581]: osd.2 [v2:192.168.123.104:6818/1080091581,v1:192.168.123.104:6819/1080091581] boot
2026-03-09T18:28:01.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:01 vm04 ceph-mon[57581]: osdmap e16: 4 total, 3 up, 4 in
2026-03-09T18:28:01.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:01 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T18:28:01.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:01 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T18:28:01.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:01 vm04 ceph-mon[57581]: pgmap v37: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T18:28:01.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:01 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
2026-03-09T18:28:01.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:01 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/1990391998' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-09T18:28:02.876 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:02 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
2026-03-09T18:28:02.876 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:02 vm04 ceph-mon[51427]: osdmap e17: 4 total, 3 up, 4 in
2026-03-09T18:28:02.876 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:02 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T18:28:02.876 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:02 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
2026-03-09T18:28:02.876 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:02 vm04 sudo[75036]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
2026-03-09T18:28:02.876 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:02 vm04 sudo[75036]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
2026-03-09T18:28:02.876 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:02 vm04 sudo[75036]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
2026-03-09T18:28:02.876 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:02 vm04 sudo[75036]: pam_unix(sudo:session): session closed for user root
2026-03-09T18:28:02.877 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:02 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
2026-03-09T18:28:02.877 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:02 vm04 ceph-mon[57581]: osdmap e17: 4 total, 3 up, 4 in
2026-03-09T18:28:02.877 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:02 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T18:28:02.877 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:02 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
2026-03-09T18:28:02.877 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:02 vm04 sudo[75040]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
2026-03-09T18:28:02.877 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:02 vm04 sudo[75040]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
2026-03-09T18:28:02.877 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:02 vm04 sudo[75040]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
2026-03-09T18:28:02.877 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:02 vm04 sudo[75040]: pam_unix(sudo:session): session closed for user root
2026-03-09T18:28:02.877 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 18:28:02 vm04 sudo[75024]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vde
2026-03-09T18:28:02.877 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 18:28:02 vm04 sudo[75024]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
2026-03-09T18:28:02.877 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 18:28:02 vm04 sudo[75024]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
2026-03-09T18:28:02.877 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 18:28:02 vm04 sudo[75024]: pam_unix(sudo:session): session closed for user root
2026-03-09T18:28:02.877 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 18:28:02 vm04 sudo[75028]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vdd
2026-03-09T18:28:02.877 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 18:28:02 vm04 sudo[75028]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
2026-03-09T18:28:02.877 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 18:28:02 vm04 sudo[75028]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
2026-03-09T18:28:02.877 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 18:28:02 vm04 sudo[75028]: pam_unix(sudo:session): session closed for user root
2026-03-09T18:28:02.877 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 18:28:02 vm04 sudo[75032]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vdc
2026-03-09T18:28:02.877 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 18:28:02 vm04 sudo[75032]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
2026-03-09T18:28:02.877 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 18:28:02 vm04 sudo[75032]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
2026-03-09T18:28:02.877 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 18:28:02 vm04 sudo[75032]: pam_unix(sudo:session): session closed for user root
2026-03-09T18:28:03.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:02 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
2026-03-09T18:28:03.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:02 vm09 ceph-mon[54744]: osdmap e17: 4 total, 3 up, 4 in
2026-03-09T18:28:03.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:02 vm09 ceph-mon[54744]:
from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:28:03.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:02 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T18:28:03.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:02 vm09 sudo[56693]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda 2026-03-09T18:28:03.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:02 vm09 sudo[56693]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-09T18:28:03.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:02 vm09 sudo[56693]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-09T18:28:03.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:02 vm09 sudo[56693]: pam_unix(sudo:session): session closed for user root 2026-03-09T18:28:03.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:03 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-09T18:28:03.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:03 vm04 ceph-mon[51427]: osdmap e18: 4 total, 3 up, 4 in 2026-03-09T18:28:03.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:03 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:28:03.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:03 vm04 ceph-mon[51427]: pgmap v40: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T18:28:03.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:03 vm04 
ceph-mon[51427]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T18:28:03.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:03 vm04 ceph-mon[51427]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T18:28:03.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:03 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:28:03.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:03 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:28:03.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:03 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:28:03.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:03 vm04 ceph-mon[51427]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T18:28:03.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:03 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:28:03.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:03 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:28:03.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:03 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:28:03.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:03 vm04 ceph-mon[51427]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T18:28:03.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:03 vm04 
ceph-mon[51427]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T18:28:03.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:03 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:28:03.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:03 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:28:03.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:03 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:28:03.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:03 vm04 ceph-mon[51427]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T18:28:03.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:03 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-09T18:28:03.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:03 vm04 ceph-mon[57581]: osdmap e18: 4 total, 3 up, 4 in 2026-03-09T18:28:03.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:03 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:28:03.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:03 vm04 ceph-mon[57581]: pgmap v40: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T18:28:03.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:03 vm04 ceph-mon[57581]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T18:28:03.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:03 vm04 
ceph-mon[57581]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T18:28:03.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:03 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:28:03.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:03 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:28:03.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:03 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:28:03.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:03 vm04 ceph-mon[57581]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T18:28:03.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:03 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:28:03.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:03 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:28:03.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:03 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:28:03.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:03 vm04 ceph-mon[57581]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T18:28:03.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:03 vm04 ceph-mon[57581]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T18:28:03.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:03 vm04 
ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:28:03.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:03 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:28:03.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:03 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:28:03.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:03 vm04 ceph-mon[57581]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T18:28:04.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:03 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-09T18:28:04.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:03 vm09 ceph-mon[54744]: osdmap e18: 4 total, 3 up, 4 in 2026-03-09T18:28:04.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:03 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:28:04.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:03 vm09 ceph-mon[54744]: pgmap v40: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T18:28:04.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:03 vm09 ceph-mon[54744]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T18:28:04.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:03 vm09 ceph-mon[54744]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T18:28:04.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:03 vm09 
ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:28:04.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:03 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:28:04.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:03 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:28:04.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:03 vm09 ceph-mon[54744]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T18:28:04.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:03 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:28:04.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:03 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:28:04.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:03 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:28:04.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:03 vm09 ceph-mon[54744]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T18:28:04.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:03 vm09 ceph-mon[54744]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T18:28:04.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:03 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:28:04.108 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:03 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:28:04.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:03 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:28:04.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:03 vm09 ceph-mon[54744]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T18:28:04.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:04 vm04 ceph-mon[51427]: osdmap e19: 4 total, 3 up, 4 in 2026-03-09T18:28:04.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:04 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:28:04.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:04 vm04 ceph-mon[57581]: osdmap e19: 4 total, 3 up, 4 in 2026-03-09T18:28:04.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:04 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:28:05.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:04 vm09 ceph-mon[54744]: osdmap e19: 4 total, 3 up, 4 in 2026-03-09T18:28:05.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:04 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:28:05.842 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:05 vm04 ceph-mon[51427]: mgrmap e15: y(active, since 66s), standbys: x 2026-03-09T18:28:05.842 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:05 vm04 ceph-mon[51427]: pgmap v42: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T18:28:05.842 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:05 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T18:28:05.842 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:05 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:05.842 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:05 vm04 ceph-mon[51427]: Deploying daemon osd.3 on vm04 2026-03-09T18:28:05.842 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:05 vm04 ceph-mon[57581]: mgrmap e15: y(active, since 66s), standbys: x 2026-03-09T18:28:05.842 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:05 vm04 ceph-mon[57581]: pgmap v42: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T18:28:05.842 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:05 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T18:28:05.842 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:05 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:05.842 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:05 vm04 ceph-mon[57581]: Deploying daemon osd.3 on vm04 2026-03-09T18:28:06.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:05 vm09 ceph-mon[54744]: mgrmap e15: y(active, since 66s), standbys: x 2026-03-09T18:28:06.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:05 vm09 ceph-mon[54744]: pgmap v42: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T18:28:06.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:05 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 
2026-03-09T18:28:06.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:05 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:06.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:05 vm09 ceph-mon[54744]: Deploying daemon osd.3 on vm04 2026-03-09T18:28:08.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:07 vm09 ceph-mon[54744]: pgmap v43: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T18:28:08.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:07 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:28:08.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:07 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:08.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:07 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:08.182 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:07 vm04 ceph-mon[51427]: pgmap v43: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T18:28:08.182 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:07 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:28:08.182 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:07 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:08.182 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:07 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:08.182 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:07 vm04 ceph-mon[57581]: pgmap v43: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 
2026-03-09T18:28:08.182 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:07 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:28:08.182 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:07 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:08.183 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:07 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:08.294 INFO:teuthology.orchestra.run.vm04.stdout:Created osd(s) 3 on host 'vm04' 2026-03-09T18:28:08.354 DEBUG:teuthology.orchestra.run.vm04:osd.3> sudo journalctl -f -n 0 -u ceph-5769e1c8-1be5-11f1-a591-591820987f3e@osd.3.service 2026-03-09T18:28:08.356 INFO:tasks.cephadm:Deploying osd.4 on vm09 with /dev/vde... 2026-03-09T18:28:08.356 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- lvm zap /dev/vde 2026-03-09T18:28:08.533 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.b/config 2026-03-09T18:28:08.963 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:08 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:08.963 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:08 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:08.963 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:08 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:08.963 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:08 vm09 ceph-mon[54744]: from='mgr.14150 
192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:28:08.963 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:08 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:08.963 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:08 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:28:08.963 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:08 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:08.963 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:08 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:08.963 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:08 vm09 ceph-mon[54744]: from='osd.3 [v2:192.168.123.104:6826/3227748853,v1:192.168.123.104:6827/3227748853]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T18:28:08.963 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:08 vm09 ceph-mon[54744]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T18:28:09.181 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:08 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:09.181 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:08 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:09.181 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:08 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:09.181 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:08 vm04 ceph-mon[51427]: from='mgr.14150 
192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:28:09.181 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:08 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:09.181 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:08 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:28:09.181 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:08 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:09.181 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:08 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:09.181 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:08 vm04 ceph-mon[51427]: from='osd.3 [v2:192.168.123.104:6826/3227748853,v1:192.168.123.104:6827/3227748853]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T18:28:09.181 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:08 vm04 ceph-mon[51427]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T18:28:09.184 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:08 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:09.184 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:08 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:09.184 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:08 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:09.184 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:08 vm04 ceph-mon[57581]: from='mgr.14150 
192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:28:09.184 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:08 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:09.184 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:08 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:28:09.184 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:08 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:09.184 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:08 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:09.184 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:08 vm04 ceph-mon[57581]: from='osd.3 [v2:192.168.123.104:6826/3227748853,v1:192.168.123.104:6827/3227748853]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T18:28:09.184 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:08 vm04 ceph-mon[57581]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T18:28:09.388 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:28:09.405 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph orch daemon add osd vm09:/dev/vde 2026-03-09T18:28:09.578 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.b/config 2026-03-09T18:28:10.138 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:09 vm09 ceph-mon[54744]: pgmap v44: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB 
used, 60 GiB / 60 GiB avail 2026-03-09T18:28:10.138 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:09 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:10.138 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:09 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:10.138 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:09 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:28:10.138 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:09 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:10.138 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:09 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:28:10.138 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:09 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:10.138 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:09 vm09 ceph-mon[54744]: from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T18:28:10.138 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:09 vm09 ceph-mon[54744]: osdmap e20: 4 total, 3 up, 4 in 2026-03-09T18:28:10.138 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:09 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:28:10.138 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:09 vm09 ceph-mon[54744]: from='osd.3 [v2:192.168.123.104:6826/3227748853,v1:192.168.123.104:6827/3227748853]' entity='osd.3' cmd=[{"prefix": 
"osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T18:28:10.138 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:09 vm09 ceph-mon[54744]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T18:28:10.138 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:09 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:28:10.138 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:09 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:28:10.138 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:09 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:10.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:09 vm04 ceph-mon[57581]: pgmap v44: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T18:28:10.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:09 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:10.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:09 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:10.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:09 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:28:10.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:09 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": 
"config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:10.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:09 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:28:10.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:09 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:10.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:09 vm04 ceph-mon[57581]: from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T18:28:10.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:09 vm04 ceph-mon[57581]: osdmap e20: 4 total, 3 up, 4 in 2026-03-09T18:28:10.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:09 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:28:10.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:09 vm04 ceph-mon[57581]: from='osd.3 [v2:192.168.123.104:6826/3227748853,v1:192.168.123.104:6827/3227748853]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T18:28:10.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:09 vm04 ceph-mon[57581]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T18:28:10.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:09 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:28:10.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:09 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth 
get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:28:10.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:09 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:10.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:09 vm04 ceph-mon[51427]: pgmap v44: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T18:28:10.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:09 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:10.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:09 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:10.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:09 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:28:10.218 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:09 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:10.218 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:09 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:28:10.218 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:09 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:10.218 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:09 vm04 ceph-mon[51427]: from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T18:28:10.218 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:09 vm04 ceph-mon[51427]: osdmap e20: 4 total, 3 up, 4 in 
2026-03-09T18:28:10.218 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:09 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:28:10.218 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:09 vm04 ceph-mon[51427]: from='osd.3 [v2:192.168.123.104:6826/3227748853,v1:192.168.123.104:6827/3227748853]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T18:28:10.218 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:09 vm04 ceph-mon[51427]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T18:28:10.218 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:09 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:28:10.218 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:09 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:28:10.218 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:09 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:11.081 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:10 vm09 ceph-mon[54744]: Detected new or changed devices on vm04 2026-03-09T18:28:11.081 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:10 vm09 ceph-mon[54744]: from='client.24187 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:28:11.081 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:10 vm09 ceph-mon[54744]: from='osd.3 ' 
entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-09T18:28:11.081 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:10 vm09 ceph-mon[54744]: osdmap e21: 4 total, 3 up, 4 in 2026-03-09T18:28:11.081 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:10 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:28:11.081 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:10 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:28:11.081 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:10 vm09 ceph-mon[54744]: from='client.? 192.168.123.109:0/3515525706' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d1342c95-9bc8-457d-bd07-044a344312a1"}]: dispatch 2026-03-09T18:28:11.081 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:10 vm09 ceph-mon[54744]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d1342c95-9bc8-457d-bd07-044a344312a1"}]: dispatch 2026-03-09T18:28:11.081 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:10 vm09 ceph-mon[54744]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d1342c95-9bc8-457d-bd07-044a344312a1"}]': finished 2026-03-09T18:28:11.081 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:10 vm09 ceph-mon[54744]: osdmap e22: 5 total, 3 up, 5 in 2026-03-09T18:28:11.082 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:10 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:28:11.082 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:10 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:28:11.212 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:10 vm04 ceph-mon[57581]: Detected new or changed devices on vm04 2026-03-09T18:28:11.212 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:10 vm04 ceph-mon[57581]: from='client.24187 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:28:11.212 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:10 vm04 ceph-mon[57581]: from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-09T18:28:11.212 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:10 vm04 ceph-mon[57581]: osdmap e21: 4 total, 3 up, 4 in 2026-03-09T18:28:11.213 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:10 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:28:11.213 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:10 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:28:11.213 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:10 vm04 ceph-mon[57581]: 
from='client.? 192.168.123.109:0/3515525706' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d1342c95-9bc8-457d-bd07-044a344312a1"}]: dispatch 2026-03-09T18:28:11.213 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:10 vm04 ceph-mon[57581]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d1342c95-9bc8-457d-bd07-044a344312a1"}]: dispatch 2026-03-09T18:28:11.213 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:10 vm04 ceph-mon[57581]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d1342c95-9bc8-457d-bd07-044a344312a1"}]': finished 2026-03-09T18:28:11.213 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:10 vm04 ceph-mon[57581]: osdmap e22: 5 total, 3 up, 5 in 2026-03-09T18:28:11.213 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:10 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:28:11.213 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:10 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:28:11.213 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:10 vm04 ceph-mon[51427]: Detected new or changed devices on vm04 2026-03-09T18:28:11.213 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:10 vm04 ceph-mon[51427]: from='client.24187 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:28:11.213 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:10 vm04 ceph-mon[51427]: from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-09T18:28:11.213 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:10 vm04 ceph-mon[51427]: osdmap e21: 4 total, 3 up, 4 in 2026-03-09T18:28:11.213 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:10 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:28:11.213 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:10 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:28:11.213 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:10 vm04 ceph-mon[51427]: from='client.? 192.168.123.109:0/3515525706' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d1342c95-9bc8-457d-bd07-044a344312a1"}]: dispatch 2026-03-09T18:28:11.213 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:10 vm04 ceph-mon[51427]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d1342c95-9bc8-457d-bd07-044a344312a1"}]: dispatch 2026-03-09T18:28:11.213 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:10 vm04 ceph-mon[51427]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d1342c95-9bc8-457d-bd07-044a344312a1"}]': finished 2026-03-09T18:28:11.213 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:10 vm04 ceph-mon[51427]: osdmap e22: 5 total, 3 up, 5 in 2026-03-09T18:28:11.213 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:10 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:28:11.213 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:10 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:28:11.467 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 18:28:11 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-3[76183]: 2026-03-09T18:28:11.210+0000 7f7e0653c640 -1 osd.3 0 waiting for initial osdmap 2026-03-09T18:28:11.467 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 18:28:11 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-3[76183]: 2026-03-09T18:28:11.221+0000 7f7e01352640 -1 osd.3 22 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T18:28:12.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:11 vm04 ceph-mon[57581]: purged_snaps scrub starts 2026-03-09T18:28:12.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:11 vm04 ceph-mon[57581]: purged_snaps scrub ok 2026-03-09T18:28:12.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:11 vm04 ceph-mon[57581]: pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T18:28:12.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:11 vm04 ceph-mon[57581]: from='client.? 
192.168.123.109:0/1557026283' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:28:12.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:11 vm04 ceph-mon[57581]: from='osd.3 ' entity='osd.3' 2026-03-09T18:28:12.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:11 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:28:12.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:11 vm04 ceph-mon[51427]: purged_snaps scrub starts 2026-03-09T18:28:12.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:11 vm04 ceph-mon[51427]: purged_snaps scrub ok 2026-03-09T18:28:12.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:11 vm04 ceph-mon[51427]: pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T18:28:12.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:11 vm04 ceph-mon[51427]: from='client.? 192.168.123.109:0/1557026283' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:28:12.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:11 vm04 ceph-mon[51427]: from='osd.3 ' entity='osd.3' 2026-03-09T18:28:12.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:11 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:28:12.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:11 vm09 ceph-mon[54744]: purged_snaps scrub starts 2026-03-09T18:28:12.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:11 vm09 ceph-mon[54744]: purged_snaps scrub ok 2026-03-09T18:28:12.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:11 vm09 ceph-mon[54744]: pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T18:28:12.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:11 vm09 ceph-mon[54744]: from='client.? 
192.168.123.109:0/1557026283' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:28:12.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:11 vm09 ceph-mon[54744]: from='osd.3 ' entity='osd.3' 2026-03-09T18:28:12.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:11 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:28:13.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:13 vm09 ceph-mon[54744]: osd.3 [v2:192.168.123.104:6826/3227748853,v1:192.168.123.104:6827/3227748853] boot 2026-03-09T18:28:13.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:13 vm09 ceph-mon[54744]: osdmap e23: 5 total, 4 up, 5 in 2026-03-09T18:28:13.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:13 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:28:13.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:13 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:28:13.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:13 vm09 ceph-mon[54744]: pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T18:28:13.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:13 vm04 ceph-mon[57581]: osd.3 [v2:192.168.123.104:6826/3227748853,v1:192.168.123.104:6827/3227748853] boot 2026-03-09T18:28:13.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:13 vm04 ceph-mon[57581]: osdmap e23: 5 total, 4 up, 5 in 2026-03-09T18:28:13.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:13 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:28:13.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:13 vm04 
ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:28:13.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:13 vm04 ceph-mon[57581]: pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T18:28:13.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:13 vm04 ceph-mon[51427]: osd.3 [v2:192.168.123.104:6826/3227748853,v1:192.168.123.104:6827/3227748853] boot 2026-03-09T18:28:13.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:13 vm04 ceph-mon[51427]: osdmap e23: 5 total, 4 up, 5 in 2026-03-09T18:28:13.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:13 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:28:13.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:13 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:28:13.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:13 vm04 ceph-mon[51427]: pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T18:28:14.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:14 vm09 ceph-mon[54744]: osdmap e24: 5 total, 4 up, 5 in 2026-03-09T18:28:14.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:14 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:28:14.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:14 vm04 ceph-mon[57581]: osdmap e24: 5 total, 4 up, 5 in 2026-03-09T18:28:14.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:14 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:28:14.717 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:14 vm04 ceph-mon[51427]: osdmap e24: 5 total, 4 up, 5 in 2026-03-09T18:28:14.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:14 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:28:15.235 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:15 vm09 ceph-mon[54744]: pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T18:28:15.235 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:15 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T18:28:15.235 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:15 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:15.235 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:15 vm09 ceph-mon[54744]: Deploying daemon osd.4 on vm09 2026-03-09T18:28:15.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:15 vm04 ceph-mon[57581]: pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T18:28:15.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:15 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T18:28:15.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:15 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:15.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:15 vm04 ceph-mon[57581]: Deploying daemon osd.4 on vm09 2026-03-09T18:28:15.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:15 vm04 ceph-mon[51427]: pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB 
used, 80 GiB / 80 GiB avail 2026-03-09T18:28:15.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:15 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T18:28:15.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:15 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:15.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:15 vm04 ceph-mon[51427]: Deploying daemon osd.4 on vm09 2026-03-09T18:28:17.885 INFO:teuthology.orchestra.run.vm09.stdout:Created osd(s) 4 on host 'vm09' 2026-03-09T18:28:17.970 DEBUG:teuthology.orchestra.run.vm09:osd.4> sudo journalctl -f -n 0 -u ceph-5769e1c8-1be5-11f1-a591-591820987f3e@osd.4.service 2026-03-09T18:28:17.971 INFO:tasks.cephadm:Deploying osd.5 on vm09 with /dev/vdd... 2026-03-09T18:28:17.971 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- lvm zap /dev/vdd 2026-03-09T18:28:18.011 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:17 vm09 ceph-mon[54744]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T18:28:18.011 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:17 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:28:18.011 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:17 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:18.011 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:17 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:18.011 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:17 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:18.011 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:17 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:18.011 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:17 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:18.011 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:17 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:28:18.011 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:17 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:18.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:17 vm04 ceph-mon[57581]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T18:28:18.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:17 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:28:18.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:17 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:18.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:17 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:18.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:17 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:18.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:17 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:18.217 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:17 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:18.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:17 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:28:18.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:17 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:18.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:17 vm04 ceph-mon[51427]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T18:28:18.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:17 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:28:18.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:17 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:18.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:17 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:18.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:17 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:18.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:17 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:18.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:17 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:18.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:17 vm04 ceph-mon[51427]: from='mgr.14150 
192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:28:18.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:17 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:18.266 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 18:28:18 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-4[58851]: 2026-03-09T18:28:18.208+0000 7fe10f591740 -1 osd.4 0 log_to_monitors true 2026-03-09T18:28:18.314 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.b/config 2026-03-09T18:28:19.028 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:18 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:28:19.029 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:18 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:19.029 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:18 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:19.029 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:18 vm09 ceph-mon[54744]: from='osd.4 [v2:192.168.123.109:6800/2821151016,v1:192.168.123.109:6801/2821151016]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T18:28:19.029 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:18 vm09 ceph-mon[54744]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T18:28:19.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:18 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:28:19.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:18 vm04 
ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:19.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:18 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:19.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:18 vm04 ceph-mon[57581]: from='osd.4 [v2:192.168.123.109:6800/2821151016,v1:192.168.123.109:6801/2821151016]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T18:28:19.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:18 vm04 ceph-mon[57581]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T18:28:19.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:18 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:28:19.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:18 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:19.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:18 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:19.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:18 vm04 ceph-mon[51427]: from='osd.4 [v2:192.168.123.109:6800/2821151016,v1:192.168.123.109:6801/2821151016]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T18:28:19.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:18 vm04 ceph-mon[51427]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T18:28:19.669 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:28:19.685 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image 
quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph orch daemon add osd vm09:/dev/vdd 2026-03-09T18:28:19.875 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.b/config 2026-03-09T18:28:20.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:19 vm04 ceph-mon[57581]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T18:28:20.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:19 vm04 ceph-mon[57581]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T18:28:20.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:19 vm04 ceph-mon[57581]: osdmap e25: 5 total, 4 up, 5 in 2026-03-09T18:28:20.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:19 vm04 ceph-mon[57581]: from='osd.4 [v2:192.168.123.109:6800/2821151016,v1:192.168.123.109:6801/2821151016]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T18:28:20.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:19 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:28:20.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:19 vm04 ceph-mon[57581]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T18:28:20.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:19 vm04 ceph-mon[57581]: Detected new or changed devices on vm09 2026-03-09T18:28:20.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:19 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 
2026-03-09T18:28:20.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:19 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:20.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:19 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:28:20.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:19 vm04 ceph-mon[57581]: Adjusting osd_memory_target on vm09 to 257.0M 2026-03-09T18:28:20.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:19 vm04 ceph-mon[57581]: Unable to set osd_memory_target on vm09 to 269530726: error parsing value: Value '269530726' is below minimum 939524096 2026-03-09T18:28:20.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:19 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:20.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:19 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:28:20.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:19 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:20.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:19 vm04 ceph-mon[51427]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T18:28:20.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:19 vm04 ceph-mon[51427]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T18:28:20.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:19 vm04 ceph-mon[51427]: osdmap e25: 5 total, 4 up, 5 in 2026-03-09T18:28:20.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 
18:28:19 vm04 ceph-mon[51427]: from='osd.4 [v2:192.168.123.109:6800/2821151016,v1:192.168.123.109:6801/2821151016]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T18:28:20.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:19 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:28:20.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:19 vm04 ceph-mon[51427]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T18:28:20.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:19 vm04 ceph-mon[51427]: Detected new or changed devices on vm09 2026-03-09T18:28:20.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:19 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:20.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:19 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:20.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:19 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:28:20.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:19 vm04 ceph-mon[51427]: Adjusting osd_memory_target on vm09 to 257.0M 2026-03-09T18:28:20.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:19 vm04 ceph-mon[51427]: Unable to set osd_memory_target on vm09 to 269530726: error parsing value: Value '269530726' is below minimum 939524096 2026-03-09T18:28:20.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:19 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-09T18:28:20.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:19 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:28:20.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:19 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:20.334 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 18:28:19 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-4[58851]: 2026-03-09T18:28:19.910+0000 7fe10b512640 -1 osd.4 0 waiting for initial osdmap 2026-03-09T18:28:20.334 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 18:28:19 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-4[58851]: 2026-03-09T18:28:19.916+0000 7fe10733c640 -1 osd.4 26 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T18:28:20.334 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:19 vm09 ceph-mon[54744]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T18:28:20.334 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:19 vm09 ceph-mon[54744]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T18:28:20.334 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:19 vm09 ceph-mon[54744]: osdmap e25: 5 total, 4 up, 5 in 2026-03-09T18:28:20.334 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:19 vm09 ceph-mon[54744]: from='osd.4 [v2:192.168.123.109:6800/2821151016,v1:192.168.123.109:6801/2821151016]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T18:28:20.334 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:19 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: 
dispatch 2026-03-09T18:28:20.334 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:19 vm09 ceph-mon[54744]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T18:28:20.334 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:19 vm09 ceph-mon[54744]: Detected new or changed devices on vm09 2026-03-09T18:28:20.334 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:19 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:20.334 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:19 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:20.334 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:19 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:28:20.334 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:19 vm09 ceph-mon[54744]: Adjusting osd_memory_target on vm09 to 257.0M 2026-03-09T18:28:20.334 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:19 vm09 ceph-mon[54744]: Unable to set osd_memory_target on vm09 to 269530726: error parsing value: Value '269530726' is below minimum 939524096 2026-03-09T18:28:20.334 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:19 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:20.334 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:19 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:28:20.334 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:19 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:21.002 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:20 vm09 ceph-mon[54744]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T18:28:21.002 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:20 vm09 ceph-mon[54744]: osdmap e26: 5 total, 4 up, 5 in 2026-03-09T18:28:21.002 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:20 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:28:21.002 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:20 vm09 ceph-mon[54744]: from='client.24214 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:28:21.002 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:20 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:28:21.002 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:20 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:28:21.002 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:20 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:21.002 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:20 vm09 ceph-mon[54744]: osd.4 [v2:192.168.123.109:6800/2821151016,v1:192.168.123.109:6801/2821151016] boot 2026-03-09T18:28:21.002 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:20 vm09 ceph-mon[54744]: osdmap e27: 5 total, 5 up, 5 in 2026-03-09T18:28:21.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:20 vm04 ceph-mon[51427]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush 
create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T18:28:21.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:20 vm04 ceph-mon[51427]: osdmap e26: 5 total, 4 up, 5 in 2026-03-09T18:28:21.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:20 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:28:21.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:20 vm04 ceph-mon[51427]: from='client.24214 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:28:21.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:20 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:28:21.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:20 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:28:21.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:20 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:21.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:20 vm04 ceph-mon[51427]: osd.4 [v2:192.168.123.109:6800/2821151016,v1:192.168.123.109:6801/2821151016] boot 2026-03-09T18:28:21.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:20 vm04 ceph-mon[51427]: osdmap e27: 5 total, 5 up, 5 in 2026-03-09T18:28:21.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:20 vm04 ceph-mon[57581]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T18:28:21.217 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:20 vm04 ceph-mon[57581]: osdmap e26: 5 total, 4 up, 5 in 2026-03-09T18:28:21.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:20 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:28:21.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:20 vm04 ceph-mon[57581]: from='client.24214 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:28:21.218 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:20 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:28:21.218 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:20 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:28:21.218 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:20 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:21.218 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:20 vm04 ceph-mon[57581]: osd.4 [v2:192.168.123.109:6800/2821151016,v1:192.168.123.109:6801/2821151016] boot 2026-03-09T18:28:21.218 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:20 vm04 ceph-mon[57581]: osdmap e27: 5 total, 5 up, 5 in 2026-03-09T18:28:22.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:21 vm04 ceph-mon[57581]: purged_snaps scrub starts 2026-03-09T18:28:22.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:21 vm04 ceph-mon[57581]: purged_snaps scrub ok 2026-03-09T18:28:22.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:21 vm04 ceph-mon[57581]: pgmap v57: 1 pgs: 1 peering; 449 KiB data, 107 MiB used, 
80 GiB / 80 GiB avail 2026-03-09T18:28:22.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:21 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:28:22.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:21 vm04 ceph-mon[57581]: from='client.? 192.168.123.109:0/4064589365' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a0757438-7809-4314-b9dd-37b37818922c"}]: dispatch 2026-03-09T18:28:22.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:21 vm04 ceph-mon[57581]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a0757438-7809-4314-b9dd-37b37818922c"}]: dispatch 2026-03-09T18:28:22.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:21 vm04 ceph-mon[57581]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a0757438-7809-4314-b9dd-37b37818922c"}]': finished 2026-03-09T18:28:22.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:21 vm04 ceph-mon[57581]: osdmap e28: 6 total, 5 up, 6 in 2026-03-09T18:28:22.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:21 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:28:22.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:21 vm04 ceph-mon[57581]: from='client.? 
192.168.123.109:0/2218618275' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:28:22.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:21 vm04 ceph-mon[51427]: purged_snaps scrub starts 2026-03-09T18:28:22.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:21 vm04 ceph-mon[51427]: purged_snaps scrub ok 2026-03-09T18:28:22.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:21 vm04 ceph-mon[51427]: pgmap v57: 1 pgs: 1 peering; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T18:28:22.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:21 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:28:22.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:21 vm04 ceph-mon[51427]: from='client.? 192.168.123.109:0/4064589365' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a0757438-7809-4314-b9dd-37b37818922c"}]: dispatch 2026-03-09T18:28:22.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:21 vm04 ceph-mon[51427]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a0757438-7809-4314-b9dd-37b37818922c"}]: dispatch 2026-03-09T18:28:22.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:21 vm04 ceph-mon[51427]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a0757438-7809-4314-b9dd-37b37818922c"}]': finished 2026-03-09T18:28:22.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:21 vm04 ceph-mon[51427]: osdmap e28: 6 total, 5 up, 6 in 2026-03-09T18:28:22.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:21 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:28:22.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:21 vm04 ceph-mon[51427]: from='client.? 
192.168.123.109:0/2218618275' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:28:22.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:21 vm09 ceph-mon[54744]: purged_snaps scrub starts 2026-03-09T18:28:22.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:21 vm09 ceph-mon[54744]: purged_snaps scrub ok 2026-03-09T18:28:22.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:21 vm09 ceph-mon[54744]: pgmap v57: 1 pgs: 1 peering; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T18:28:22.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:21 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:28:22.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:21 vm09 ceph-mon[54744]: from='client.? 192.168.123.109:0/4064589365' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a0757438-7809-4314-b9dd-37b37818922c"}]: dispatch 2026-03-09T18:28:22.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:21 vm09 ceph-mon[54744]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a0757438-7809-4314-b9dd-37b37818922c"}]: dispatch 2026-03-09T18:28:22.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:21 vm09 ceph-mon[54744]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a0757438-7809-4314-b9dd-37b37818922c"}]': finished 2026-03-09T18:28:22.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:21 vm09 ceph-mon[54744]: osdmap e28: 6 total, 5 up, 6 in 2026-03-09T18:28:22.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:21 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:28:22.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:21 vm09 ceph-mon[54744]: from='client.? 
192.168.123.109:0/2218618275' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:28:23.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:22 vm04 ceph-mon[51427]: osdmap e29: 6 total, 5 up, 6 in 2026-03-09T18:28:23.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:22 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:28:23.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:22 vm04 ceph-mon[57581]: osdmap e29: 6 total, 5 up, 6 in 2026-03-09T18:28:23.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:22 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:28:23.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:22 vm09 ceph-mon[54744]: osdmap e29: 6 total, 5 up, 6 in 2026-03-09T18:28:23.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:22 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:28:24.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:23 vm04 ceph-mon[57581]: pgmap v61: 1 pgs: 1 peering; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-09T18:28:24.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:23 vm04 ceph-mon[51427]: pgmap v61: 1 pgs: 1 peering; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-09T18:28:24.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:23 vm09 ceph-mon[54744]: pgmap v61: 1 pgs: 1 peering; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-09T18:28:26.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:25 vm04 ceph-mon[57581]: pgmap v62: 1 pgs: 1 peering; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-09T18:28:26.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:25 vm04 ceph-mon[57581]: from='mgr.14150 
192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T18:28:26.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:25 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:26.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:25 vm04 ceph-mon[57581]: Deploying daemon osd.5 on vm09 2026-03-09T18:28:26.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:25 vm04 ceph-mon[51427]: pgmap v62: 1 pgs: 1 peering; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-09T18:28:26.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:25 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T18:28:26.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:25 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:26.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:25 vm04 ceph-mon[51427]: Deploying daemon osd.5 on vm09 2026-03-09T18:28:26.225 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:25 vm09 ceph-mon[54744]: pgmap v62: 1 pgs: 1 peering; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-09T18:28:26.225 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:25 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T18:28:26.225 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:25 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:26.225 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:25 vm09 ceph-mon[54744]: Deploying daemon osd.5 on vm09 2026-03-09T18:28:27.969 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:27 vm09 ceph-mon[54744]: pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail; 75 KiB/s, 0 objects/s recovering 2026-03-09T18:28:27.969 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:27 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:28:27.970 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:27 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:27.970 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:27 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:28.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:27 vm04 ceph-mon[57581]: pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail; 75 KiB/s, 0 objects/s recovering 2026-03-09T18:28:28.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:27 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:28:28.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:27 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:28.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:27 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:28.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:27 vm04 ceph-mon[51427]: pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail; 75 KiB/s, 0 objects/s recovering 2026-03-09T18:28:28.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:27 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:28:28.467 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:27 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:28.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:27 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:28.499 INFO:teuthology.orchestra.run.vm09.stdout:Created osd(s) 5 on host 'vm09' 2026-03-09T18:28:28.569 DEBUG:teuthology.orchestra.run.vm09:osd.5> sudo journalctl -f -n 0 -u ceph-5769e1c8-1be5-11f1-a591-591820987f3e@osd.5.service 2026-03-09T18:28:28.570 INFO:tasks.cephadm:Deploying osd.6 on vm09 with /dev/vdc... 2026-03-09T18:28:28.570 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- lvm zap /dev/vdc 2026-03-09T18:28:28.854 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.b/config 2026-03-09T18:28:29.106 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:29 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:29.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:29 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:29.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:29 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:29.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:29 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:28:29.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:29 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' 
entity='mgr.y' 2026-03-09T18:28:29.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:29 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:28:29.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:29 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:29.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:29 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:29.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:29 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:29.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:29 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:29.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:29 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:29.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:29 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:28:29.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:29 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:29.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:29 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:28:29.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:29 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:29.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:29 vm04 ceph-mon[57581]: from='mgr.14150 
192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:29.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:29 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:29.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:29 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:29.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:29 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:29.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:29 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:28:29.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:29 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:29.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:29 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:28:29.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:29 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:29.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:29 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:29.729 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 18:28:29 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-5[63689]: 2026-03-09T18:28:29.514+0000 7fdda3bbf740 -1 osd.5 0 log_to_monitors true 2026-03-09T18:28:30.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:30 vm09 ceph-mon[54744]: pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 57 KiB/s, 0 objects/s recovering 2026-03-09T18:28:30.296 
INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:28:30.320 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph orch daemon add osd vm09:/dev/vdc 2026-03-09T18:28:30.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:30 vm09 ceph-mon[54744]: from='osd.5 [v2:192.168.123.109:6808/3792197053,v1:192.168.123.109:6809/3792197053]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T18:28:30.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:30 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:30.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:30 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:30.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:30 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:28:30.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:30 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:28:30.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:30 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:30.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:30 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:28:30.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:30 vm09 
ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:30.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:30 vm04 ceph-mon[57581]: pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 57 KiB/s, 0 objects/s recovering 2026-03-09T18:28:30.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:30 vm04 ceph-mon[57581]: from='osd.5 [v2:192.168.123.109:6808/3792197053,v1:192.168.123.109:6809/3792197053]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T18:28:30.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:30 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:30.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:30 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:30.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:30 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:28:30.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:30 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:28:30.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:30 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:30.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:30 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:28:30.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:30 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' 
entity='mgr.y' 2026-03-09T18:28:30.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:30 vm04 ceph-mon[51427]: pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 57 KiB/s, 0 objects/s recovering 2026-03-09T18:28:30.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:30 vm04 ceph-mon[51427]: from='osd.5 [v2:192.168.123.109:6808/3792197053,v1:192.168.123.109:6809/3792197053]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T18:28:30.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:30 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:30.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:30 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:30.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:30 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:28:30.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:30 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:28:30.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:30 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:30.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:30 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:28:30.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:30 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:30.500 
INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.b/config 2026-03-09T18:28:31.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:31 vm09 ceph-mon[54744]: Detected new or changed devices on vm09 2026-03-09T18:28:31.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:31 vm09 ceph-mon[54744]: Adjusting osd_memory_target on vm09 to 128.5M 2026-03-09T18:28:31.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:31 vm09 ceph-mon[54744]: Unable to set osd_memory_target on vm09 to 134765363: error parsing value: Value '134765363' is below minimum 939524096 2026-03-09T18:28:31.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:31 vm09 ceph-mon[54744]: from='osd.5 [v2:192.168.123.109:6808/3792197053,v1:192.168.123.109:6809/3792197053]' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-09T18:28:31.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:31 vm09 ceph-mon[54744]: osdmap e30: 6 total, 5 up, 6 in 2026-03-09T18:28:31.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:31 vm09 ceph-mon[54744]: from='osd.5 [v2:192.168.123.109:6808/3792197053,v1:192.168.123.109:6809/3792197053]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T18:28:31.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:31 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:28:31.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:31 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:28:31.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:31 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 
cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:28:31.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:31 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:31.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:31 vm04 ceph-mon[57581]: Detected new or changed devices on vm09 2026-03-09T18:28:31.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:31 vm04 ceph-mon[57581]: Adjusting osd_memory_target on vm09 to 128.5M 2026-03-09T18:28:31.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:31 vm04 ceph-mon[57581]: Unable to set osd_memory_target on vm09 to 134765363: error parsing value: Value '134765363' is below minimum 939524096 2026-03-09T18:28:31.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:31 vm04 ceph-mon[57581]: from='osd.5 [v2:192.168.123.109:6808/3792197053,v1:192.168.123.109:6809/3792197053]' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-09T18:28:31.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:31 vm04 ceph-mon[57581]: osdmap e30: 6 total, 5 up, 6 in 2026-03-09T18:28:31.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:31 vm04 ceph-mon[57581]: from='osd.5 [v2:192.168.123.109:6808/3792197053,v1:192.168.123.109:6809/3792197053]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T18:28:31.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:31 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:28:31.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:31 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: 
dispatch 2026-03-09T18:28:31.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:31 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:28:31.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:31 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:31.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:31 vm04 ceph-mon[51427]: Detected new or changed devices on vm09 2026-03-09T18:28:31.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:31 vm04 ceph-mon[51427]: Adjusting osd_memory_target on vm09 to 128.5M 2026-03-09T18:28:31.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:31 vm04 ceph-mon[51427]: Unable to set osd_memory_target on vm09 to 134765363: error parsing value: Value '134765363' is below minimum 939524096 2026-03-09T18:28:31.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:31 vm04 ceph-mon[51427]: from='osd.5 [v2:192.168.123.109:6808/3792197053,v1:192.168.123.109:6809/3792197053]' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-09T18:28:31.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:31 vm04 ceph-mon[51427]: osdmap e30: 6 total, 5 up, 6 in 2026-03-09T18:28:31.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:31 vm04 ceph-mon[51427]: from='osd.5 [v2:192.168.123.109:6808/3792197053,v1:192.168.123.109:6809/3792197053]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T18:28:31.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:31 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:28:31.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 
09 18:28:31 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:28:31.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:31 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:28:31.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:31 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:32.163 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:32 vm09 ceph-mon[54744]: pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 51 KiB/s, 0 objects/s recovering 2026-03-09T18:28:32.163 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:32 vm09 ceph-mon[54744]: from='client.24241 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:28:32.163 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:32 vm09 ceph-mon[54744]: from='osd.5 [v2:192.168.123.109:6808/3792197053,v1:192.168.123.109:6809/3792197053]' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T18:28:32.163 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:32 vm09 ceph-mon[54744]: osdmap e31: 6 total, 5 up, 6 in 2026-03-09T18:28:32.163 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:32 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:28:32.163 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:32 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 
2026-03-09T18:28:32.163 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:32 vm09 ceph-mon[54744]: from='client.? 192.168.123.109:0/2786758799' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "45c78567-75e6-4026-9257-685a9df3da40"}]: dispatch 2026-03-09T18:28:32.163 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:32 vm09 ceph-mon[54744]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "45c78567-75e6-4026-9257-685a9df3da40"}]: dispatch 2026-03-09T18:28:32.163 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:32 vm09 ceph-mon[54744]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "45c78567-75e6-4026-9257-685a9df3da40"}]': finished 2026-03-09T18:28:32.163 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:32 vm09 ceph-mon[54744]: osdmap e32: 7 total, 5 up, 7 in 2026-03-09T18:28:32.163 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:32 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:28:32.163 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:32 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:28:32.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:32 vm04 ceph-mon[57581]: pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 51 KiB/s, 0 objects/s recovering 2026-03-09T18:28:32.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:32 vm04 ceph-mon[57581]: from='client.24241 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:28:32.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:32 vm04 ceph-mon[57581]: from='osd.5 [v2:192.168.123.109:6808/3792197053,v1:192.168.123.109:6809/3792197053]' entity='osd.5' cmd='[{"prefix": "osd crush 
create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T18:28:32.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:32 vm04 ceph-mon[57581]: osdmap e31: 6 total, 5 up, 6 in 2026-03-09T18:28:32.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:32 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:28:32.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:32 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:28:32.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:32 vm04 ceph-mon[57581]: from='client.? 192.168.123.109:0/2786758799' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "45c78567-75e6-4026-9257-685a9df3da40"}]: dispatch 2026-03-09T18:28:32.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:32 vm04 ceph-mon[57581]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "45c78567-75e6-4026-9257-685a9df3da40"}]: dispatch 2026-03-09T18:28:32.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:32 vm04 ceph-mon[57581]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "45c78567-75e6-4026-9257-685a9df3da40"}]': finished 2026-03-09T18:28:32.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:32 vm04 ceph-mon[57581]: osdmap e32: 7 total, 5 up, 7 in 2026-03-09T18:28:32.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:32 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:28:32.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:32 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:28:32.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:32 vm04 ceph-mon[51427]: pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 51 KiB/s, 0 objects/s recovering 2026-03-09T18:28:32.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:32 vm04 ceph-mon[51427]: from='client.24241 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:28:32.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:32 vm04 ceph-mon[51427]: from='osd.5 [v2:192.168.123.109:6808/3792197053,v1:192.168.123.109:6809/3792197053]' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T18:28:32.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:32 vm04 ceph-mon[51427]: osdmap e31: 6 total, 5 up, 6 in 2026-03-09T18:28:32.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:32 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:28:32.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:32 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": 
"osd metadata", "id": 5}]: dispatch 2026-03-09T18:28:32.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:32 vm04 ceph-mon[51427]: from='client.? 192.168.123.109:0/2786758799' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "45c78567-75e6-4026-9257-685a9df3da40"}]: dispatch 2026-03-09T18:28:32.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:32 vm04 ceph-mon[51427]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "45c78567-75e6-4026-9257-685a9df3da40"}]: dispatch 2026-03-09T18:28:32.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:32 vm04 ceph-mon[51427]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "45c78567-75e6-4026-9257-685a9df3da40"}]': finished 2026-03-09T18:28:32.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:32 vm04 ceph-mon[51427]: osdmap e32: 7 total, 5 up, 7 in 2026-03-09T18:28:32.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:32 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:28:32.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:32 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:28:32.608 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 18:28:32 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-5[63689]: 2026-03-09T18:28:32.377+0000 7fdd9fb40640 -1 osd.5 0 waiting for initial osdmap 2026-03-09T18:28:32.609 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 18:28:32 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-5[63689]: 2026-03-09T18:28:32.391+0000 7fdd9b169640 -1 osd.5 32 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T18:28:33.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:33 vm04 ceph-mon[57581]: purged_snaps scrub starts 2026-03-09T18:28:33.467 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:33 vm04 ceph-mon[57581]: purged_snaps scrub ok 2026-03-09T18:28:33.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:33 vm04 ceph-mon[57581]: from='client.? 192.168.123.109:0/3353312657' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:28:33.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:33 vm04 ceph-mon[57581]: from='osd.5 [v2:192.168.123.109:6808/3792197053,v1:192.168.123.109:6809/3792197053]' entity='osd.5' 2026-03-09T18:28:33.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:33 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:28:33.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:33 vm04 ceph-mon[57581]: osd.5 [v2:192.168.123.109:6808/3792197053,v1:192.168.123.109:6809/3792197053] boot 2026-03-09T18:28:33.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:33 vm04 ceph-mon[57581]: osdmap e33: 7 total, 6 up, 7 in 2026-03-09T18:28:33.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:33 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:28:33.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:33 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:28:33.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:33 vm04 ceph-mon[57581]: osdmap e34: 7 total, 6 up, 7 in 2026-03-09T18:28:33.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:33 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:28:33.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:33 vm04 ceph-mon[51427]: purged_snaps scrub starts 2026-03-09T18:28:33.467 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:33 vm04 ceph-mon[51427]: purged_snaps scrub ok 2026-03-09T18:28:33.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:33 vm04 ceph-mon[51427]: from='client.? 192.168.123.109:0/3353312657' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:28:33.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:33 vm04 ceph-mon[51427]: from='osd.5 [v2:192.168.123.109:6808/3792197053,v1:192.168.123.109:6809/3792197053]' entity='osd.5' 2026-03-09T18:28:33.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:33 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:28:33.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:33 vm04 ceph-mon[51427]: osd.5 [v2:192.168.123.109:6808/3792197053,v1:192.168.123.109:6809/3792197053] boot 2026-03-09T18:28:33.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:33 vm04 ceph-mon[51427]: osdmap e33: 7 total, 6 up, 7 in 2026-03-09T18:28:33.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:33 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:28:33.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:33 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:28:33.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:33 vm04 ceph-mon[51427]: osdmap e34: 7 total, 6 up, 7 in 2026-03-09T18:28:33.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:33 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:28:33.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:33 vm09 ceph-mon[54744]: purged_snaps scrub starts 2026-03-09T18:28:33.608 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:33 vm09 ceph-mon[54744]: purged_snaps scrub ok 2026-03-09T18:28:33.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:33 vm09 ceph-mon[54744]: from='client.? 192.168.123.109:0/3353312657' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:28:33.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:33 vm09 ceph-mon[54744]: from='osd.5 [v2:192.168.123.109:6808/3792197053,v1:192.168.123.109:6809/3792197053]' entity='osd.5' 2026-03-09T18:28:33.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:33 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:28:33.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:33 vm09 ceph-mon[54744]: osd.5 [v2:192.168.123.109:6808/3792197053,v1:192.168.123.109:6809/3792197053] boot 2026-03-09T18:28:33.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:33 vm09 ceph-mon[54744]: osdmap e33: 7 total, 6 up, 7 in 2026-03-09T18:28:33.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:33 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:28:33.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:33 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:28:33.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:33 vm09 ceph-mon[54744]: osdmap e34: 7 total, 6 up, 7 in 2026-03-09T18:28:33.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:33 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:28:34.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:34 vm04 ceph-mon[57581]: pgmap v71: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 
GiB avail 2026-03-09T18:28:34.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:34 vm04 ceph-mon[57581]: osdmap e35: 7 total, 6 up, 7 in 2026-03-09T18:28:34.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:34 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:28:34.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:34 vm04 ceph-mon[51427]: pgmap v71: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T18:28:34.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:34 vm04 ceph-mon[51427]: osdmap e35: 7 total, 6 up, 7 in 2026-03-09T18:28:34.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:34 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:28:34.563 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:34 vm09 ceph-mon[54744]: pgmap v71: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T18:28:34.563 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:34 vm09 ceph-mon[54744]: osdmap e35: 7 total, 6 up, 7 in 2026-03-09T18:28:34.563 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:34 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:28:36.344 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:36 vm09 ceph-mon[54744]: pgmap v73: 1 pgs: 1 peering; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T18:28:36.344 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:36 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T18:28:36.344 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:36 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 
cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:36.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:36 vm04 ceph-mon[57581]: pgmap v73: 1 pgs: 1 peering; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T18:28:36.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:36 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T18:28:36.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:36 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:36.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:36 vm04 ceph-mon[51427]: pgmap v73: 1 pgs: 1 peering; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T18:28:36.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:36 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T18:28:36.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:36 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:37.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:37 vm04 ceph-mon[57581]: Deploying daemon osd.6 on vm09 2026-03-09T18:28:37.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:37 vm04 ceph-mon[51427]: Deploying daemon osd.6 on vm09 2026-03-09T18:28:37.499 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:37 vm09 ceph-mon[54744]: Deploying daemon osd.6 on vm09 2026-03-09T18:28:38.243 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:38 vm09 ceph-mon[54744]: pgmap v74: 1 pgs: 1 peering; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T18:28:38.243 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:38 vm09 ceph-mon[54744]: from='mgr.14150 
192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:28:38.244 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:38 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:38.244 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:38 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:38.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:38 vm04 ceph-mon[51427]: pgmap v74: 1 pgs: 1 peering; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T18:28:38.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:38 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:28:38.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:38 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:38.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:38 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:38.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:38 vm04 ceph-mon[57581]: pgmap v74: 1 pgs: 1 peering; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T18:28:38.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:38 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:28:38.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:38 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:38.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:38 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:38.857 INFO:teuthology.orchestra.run.vm09.stdout:Created osd(s) 6 on host 'vm09' 
2026-03-09T18:28:38.917 DEBUG:teuthology.orchestra.run.vm09:osd.6> sudo journalctl -f -n 0 -u ceph-5769e1c8-1be5-11f1-a591-591820987f3e@osd.6.service 2026-03-09T18:28:38.959 INFO:tasks.cephadm:Deploying osd.7 on vm09 with /dev/vdb... 2026-03-09T18:28:38.959 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- lvm zap /dev/vdb 2026-03-09T18:28:39.260 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.b/config 2026-03-09T18:28:39.437 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:39 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:39.437 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:39 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:39.437 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:39 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:39.437 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:39 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:28:39.437 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:39 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:39.437 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:39 vm09 ceph-mon[54744]: pgmap v75: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T18:28:39.437 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:39 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": 
"json"}]: dispatch 2026-03-09T18:28:39.437 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:39 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:39.437 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:39 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:39.437 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:39 vm09 ceph-mon[54744]: from='osd.6 [v2:192.168.123.109:6816/434417640,v1:192.168.123.109:6817/434417640]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T18:28:39.437 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:39 vm09 ceph-mon[54744]: from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T18:28:39.438 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 18:28:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-6[68773]: 2026-03-09T18:28:39.377+0000 7fd84a7cf740 -1 osd.6 0 log_to_monitors true 2026-03-09T18:28:39.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:39 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:39.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:39 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:39.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:39 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:39.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:39 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:28:39.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:39 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 
2026-03-09T18:28:39.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:39 vm04 ceph-mon[57581]: pgmap v75: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T18:28:39.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:39 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:28:39.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:39 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:39.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:39 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:39.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:39 vm04 ceph-mon[57581]: from='osd.6 [v2:192.168.123.109:6816/434417640,v1:192.168.123.109:6817/434417640]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T18:28:39.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:39 vm04 ceph-mon[57581]: from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T18:28:39.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:39 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:39.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:39 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:39.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:39 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:39.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:39 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: 
dispatch 2026-03-09T18:28:39.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:39 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:39.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:39 vm04 ceph-mon[51427]: pgmap v75: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T18:28:39.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:39 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:28:39.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:39 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:39.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:39 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:39.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:39 vm04 ceph-mon[51427]: from='osd.6 [v2:192.168.123.109:6816/434417640,v1:192.168.123.109:6817/434417640]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T18:28:39.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:39 vm04 ceph-mon[51427]: from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T18:28:40.613 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:28:40.636 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph orch daemon add osd vm09:/dev/vdb 2026-03-09T18:28:40.813 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.b/config 2026-03-09T18:28:41.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 
18:28:40 vm09 ceph-mon[54744]: from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-09T18:28:41.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:40 vm09 ceph-mon[54744]: from='osd.6 [v2:192.168.123.109:6816/434417640,v1:192.168.123.109:6817/434417640]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T18:28:41.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:40 vm09 ceph-mon[54744]: osdmap e36: 7 total, 6 up, 7 in 2026-03-09T18:28:41.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:40 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:28:41.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:40 vm09 ceph-mon[54744]: from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T18:28:41.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:40 vm09 ceph-mon[54744]: Detected new or changed devices on vm09 2026-03-09T18:28:41.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:40 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:41.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:40 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:41.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:40 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:28:41.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:40 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": 
"osd_memory_target"}]: dispatch 2026-03-09T18:28:41.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:40 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:28:41.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:40 vm09 ceph-mon[54744]: Adjusting osd_memory_target on vm09 to 87737k 2026-03-09T18:28:41.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:40 vm09 ceph-mon[54744]: Unable to set osd_memory_target on vm09 to 89843575: error parsing value: Value '89843575' is below minimum 939524096 2026-03-09T18:28:41.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:40 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:41.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:40 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:28:41.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:40 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:41.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:40 vm04 ceph-mon[57581]: from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-09T18:28:41.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:40 vm04 ceph-mon[57581]: from='osd.6 [v2:192.168.123.109:6816/434417640,v1:192.168.123.109:6817/434417640]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T18:28:41.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:40 vm04 ceph-mon[57581]: osdmap e36: 7 total, 6 up, 7 in 2026-03-09T18:28:41.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 
18:28:40 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:28:41.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:40 vm04 ceph-mon[57581]: from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T18:28:41.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:40 vm04 ceph-mon[57581]: Detected new or changed devices on vm09 2026-03-09T18:28:41.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:40 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:41.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:40 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:41.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:40 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:28:41.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:40 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:28:41.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:40 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:28:41.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:40 vm04 ceph-mon[57581]: Adjusting osd_memory_target on vm09 to 87737k 2026-03-09T18:28:41.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:40 vm04 ceph-mon[57581]: Unable to set osd_memory_target on vm09 to 89843575: error parsing value: Value '89843575' is below minimum 939524096 2026-03-09T18:28:41.217 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:40 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:41.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:40 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:28:41.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:40 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:41.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:40 vm04 ceph-mon[51427]: from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-09T18:28:41.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:40 vm04 ceph-mon[51427]: from='osd.6 [v2:192.168.123.109:6816/434417640,v1:192.168.123.109:6817/434417640]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T18:28:41.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:40 vm04 ceph-mon[51427]: osdmap e36: 7 total, 6 up, 7 in 2026-03-09T18:28:41.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:40 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:28:41.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:40 vm04 ceph-mon[51427]: from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T18:28:41.218 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:40 vm04 ceph-mon[51427]: Detected new or changed devices on vm09 2026-03-09T18:28:41.218 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:40 vm04 ceph-mon[51427]: from='mgr.14150 
192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:41.218 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:40 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:41.218 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:40 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:28:41.218 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:40 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:28:41.218 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:40 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:28:41.218 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:40 vm04 ceph-mon[51427]: Adjusting osd_memory_target on vm09 to 87737k 2026-03-09T18:28:41.218 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:40 vm04 ceph-mon[51427]: Unable to set osd_memory_target on vm09 to 89843575: error parsing value: Value '89843575' is below minimum 939524096 2026-03-09T18:28:41.218 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:40 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:41.218 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:40 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:28:41.218 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:40 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:42.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 
18:28:41 vm09 ceph-mon[54744]: pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T18:28:42.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:41 vm09 ceph-mon[54744]: from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T18:28:42.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:41 vm09 ceph-mon[54744]: osdmap e37: 7 total, 6 up, 7 in 2026-03-09T18:28:42.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:41 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:28:42.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:41 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:28:42.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:41 vm09 ceph-mon[54744]: from='client.24251 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:28:42.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:41 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:28:42.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:41 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:28:42.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:41 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:42.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:41 vm09 ceph-mon[54744]: from='osd.6 ' 
entity='osd.6' 2026-03-09T18:28:42.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:41 vm09 ceph-mon[54744]: osd.6 [v2:192.168.123.109:6816/434417640,v1:192.168.123.109:6817/434417640] boot 2026-03-09T18:28:42.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:41 vm09 ceph-mon[54744]: osdmap e38: 7 total, 7 up, 7 in 2026-03-09T18:28:42.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:41 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:28:42.022 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 18:28:41 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-6[68773]: 2026-03-09T18:28:41.749+0000 7fd846750640 -1 osd.6 0 waiting for initial osdmap 2026-03-09T18:28:42.022 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 18:28:41 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-6[68773]: 2026-03-09T18:28:41.759+0000 7fd841d79640 -1 osd.6 37 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T18:28:42.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:41 vm04 ceph-mon[57581]: pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T18:28:42.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:41 vm04 ceph-mon[57581]: from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T18:28:42.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:41 vm04 ceph-mon[57581]: osdmap e37: 7 total, 6 up, 7 in 2026-03-09T18:28:42.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:41 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:28:42.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:41 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' 
entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:28:42.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:41 vm04 ceph-mon[57581]: from='client.24251 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:28:42.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:41 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:28:42.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:41 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:28:42.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:41 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:42.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:41 vm04 ceph-mon[57581]: from='osd.6 ' entity='osd.6' 2026-03-09T18:28:42.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:41 vm04 ceph-mon[57581]: osd.6 [v2:192.168.123.109:6816/434417640,v1:192.168.123.109:6817/434417640] boot 2026-03-09T18:28:42.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:41 vm04 ceph-mon[57581]: osdmap e38: 7 total, 7 up, 7 in 2026-03-09T18:28:42.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:41 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:28:42.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:41 vm04 ceph-mon[51427]: pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T18:28:42.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:41 vm04 ceph-mon[51427]: from='osd.6 ' entity='osd.6' 
cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T18:28:42.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:41 vm04 ceph-mon[51427]: osdmap e37: 7 total, 6 up, 7 in 2026-03-09T18:28:42.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:41 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:28:42.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:41 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:28:42.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:41 vm04 ceph-mon[51427]: from='client.24251 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:28:42.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:41 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:28:42.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:41 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:28:42.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:41 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:42.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:41 vm04 ceph-mon[51427]: from='osd.6 ' entity='osd.6' 2026-03-09T18:28:42.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:41 vm04 ceph-mon[51427]: osd.6 [v2:192.168.123.109:6816/434417640,v1:192.168.123.109:6817/434417640] boot 2026-03-09T18:28:42.217 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:41 vm04 ceph-mon[51427]: osdmap e38: 7 total, 7 up, 7 in 2026-03-09T18:28:42.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:41 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:28:43.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:42 vm04 ceph-mon[57581]: purged_snaps scrub starts 2026-03-09T18:28:43.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:42 vm04 ceph-mon[57581]: purged_snaps scrub ok 2026-03-09T18:28:43.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:42 vm04 ceph-mon[57581]: from='client.? 192.168.123.109:0/3059549874' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "df6ed9a4-e641-43b0-965e-fef9ac178911"}]: dispatch 2026-03-09T18:28:43.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:42 vm04 ceph-mon[57581]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "df6ed9a4-e641-43b0-965e-fef9ac178911"}]: dispatch 2026-03-09T18:28:43.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:42 vm04 ceph-mon[57581]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "df6ed9a4-e641-43b0-965e-fef9ac178911"}]': finished 2026-03-09T18:28:43.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:42 vm04 ceph-mon[57581]: osdmap e39: 8 total, 7 up, 8 in 2026-03-09T18:28:43.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:42 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:28:43.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:42 vm04 ceph-mon[57581]: from='client.? 
192.168.123.109:0/756515629' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:28:43.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:42 vm04 ceph-mon[51427]: purged_snaps scrub starts 2026-03-09T18:28:43.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:42 vm04 ceph-mon[51427]: purged_snaps scrub ok 2026-03-09T18:28:43.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:42 vm04 ceph-mon[51427]: from='client.? 192.168.123.109:0/3059549874' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "df6ed9a4-e641-43b0-965e-fef9ac178911"}]: dispatch 2026-03-09T18:28:43.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:42 vm04 ceph-mon[51427]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "df6ed9a4-e641-43b0-965e-fef9ac178911"}]: dispatch 2026-03-09T18:28:43.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:42 vm04 ceph-mon[51427]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "df6ed9a4-e641-43b0-965e-fef9ac178911"}]': finished 2026-03-09T18:28:43.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:42 vm04 ceph-mon[51427]: osdmap e39: 8 total, 7 up, 8 in 2026-03-09T18:28:43.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:42 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:28:43.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:42 vm04 ceph-mon[51427]: from='client.? 
192.168.123.109:0/756515629' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:28:43.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:42 vm09 ceph-mon[54744]: purged_snaps scrub starts 2026-03-09T18:28:43.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:42 vm09 ceph-mon[54744]: purged_snaps scrub ok 2026-03-09T18:28:43.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:42 vm09 ceph-mon[54744]: from='client.? 192.168.123.109:0/3059549874' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "df6ed9a4-e641-43b0-965e-fef9ac178911"}]: dispatch 2026-03-09T18:28:43.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:42 vm09 ceph-mon[54744]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "df6ed9a4-e641-43b0-965e-fef9ac178911"}]: dispatch 2026-03-09T18:28:43.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:42 vm09 ceph-mon[54744]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "df6ed9a4-e641-43b0-965e-fef9ac178911"}]': finished 2026-03-09T18:28:43.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:42 vm09 ceph-mon[54744]: osdmap e39: 8 total, 7 up, 8 in 2026-03-09T18:28:43.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:42 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:28:43.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:42 vm09 ceph-mon[54744]: from='client.? 
192.168.123.109:0/756515629' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:28:44.360 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:43 vm09 ceph-mon[54744]: pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 487 MiB used, 139 GiB / 140 GiB avail 2026-03-09T18:28:44.360 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:43 vm09 ceph-mon[54744]: osdmap e40: 8 total, 7 up, 8 in 2026-03-09T18:28:44.360 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:43 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:28:44.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:43 vm04 ceph-mon[57581]: pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 487 MiB used, 139 GiB / 140 GiB avail 2026-03-09T18:28:44.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:43 vm04 ceph-mon[57581]: osdmap e40: 8 total, 7 up, 8 in 2026-03-09T18:28:44.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:43 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:28:44.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:43 vm04 ceph-mon[51427]: pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 487 MiB used, 139 GiB / 140 GiB avail 2026-03-09T18:28:44.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:43 vm04 ceph-mon[51427]: osdmap e40: 8 total, 7 up, 8 in 2026-03-09T18:28:44.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:43 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:28:46.095 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:45 vm09 ceph-mon[54744]: pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 486 MiB used, 139 GiB / 140 GiB avail 2026-03-09T18:28:46.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:45 vm04 ceph-mon[57581]: pgmap 
v83: 1 pgs: 1 active+clean; 449 KiB data, 486 MiB used, 139 GiB / 140 GiB avail 2026-03-09T18:28:46.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:45 vm04 ceph-mon[51427]: pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 486 MiB used, 139 GiB / 140 GiB avail 2026-03-09T18:28:47.205 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:46 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-09T18:28:47.205 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:46 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:47.205 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:46 vm09 ceph-mon[54744]: Deploying daemon osd.7 on vm09 2026-03-09T18:28:47.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:46 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-09T18:28:47.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:46 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:47.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:46 vm04 ceph-mon[57581]: Deploying daemon osd.7 on vm09 2026-03-09T18:28:47.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:46 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-09T18:28:47.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:46 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:47.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:46 vm04 ceph-mon[51427]: Deploying daemon osd.7 on vm09 
2026-03-09T18:28:48.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:47 vm09 ceph-mon[54744]: pgmap v84: 1 pgs: 1 active+clean; 449 KiB data, 486 MiB used, 139 GiB / 140 GiB avail 2026-03-09T18:28:48.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:47 vm04 ceph-mon[57581]: pgmap v84: 1 pgs: 1 active+clean; 449 KiB data, 486 MiB used, 139 GiB / 140 GiB avail 2026-03-09T18:28:48.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:47 vm04 ceph-mon[51427]: pgmap v84: 1 pgs: 1 active+clean; 449 KiB data, 486 MiB used, 139 GiB / 140 GiB avail 2026-03-09T18:28:49.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:48 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:28:49.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:48 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:49.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:48 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:49.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:48 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:28:49.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:48 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:49.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:48 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:49.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:48 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:28:49.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:48 vm04 ceph-mon[51427]: from='mgr.14150 
192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:49.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:48 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:49.633 INFO:teuthology.orchestra.run.vm09.stdout:Created osd(s) 7 on host 'vm09' 2026-03-09T18:28:49.691 DEBUG:teuthology.orchestra.run.vm09:osd.7> sudo journalctl -f -n 0 -u ceph-5769e1c8-1be5-11f1-a591-591820987f3e@osd.7.service 2026-03-09T18:28:49.692 INFO:tasks.cephadm:Waiting for 8 OSDs to come up... 2026-03-09T18:28:49.692 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph osd stat -f json 2026-03-09T18:28:49.859 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:28:49 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:28:49.790+0000 7f49bba72740 -1 osd.7 0 log_to_monitors true 2026-03-09T18:28:49.889 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config 2026-03-09T18:28:50.134 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:28:50.200 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:50 vm09 ceph-mon[54744]: pgmap v85: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T18:28:50.200 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:50 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:50.222 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:50 vm04 ceph-mon[51427]: pgmap v85: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T18:28:50.222 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:50 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:50.222 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:50 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:50.222 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:50 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:50.222 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:50 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:28:50.222 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:50 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:50.222 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:50 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:28:50.222 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:50 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:50.222 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:50 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:50.222 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:50 vm04 ceph-mon[51427]: from='osd.7 [v2:192.168.123.109:6824/3755915520,v1:192.168.123.109:6825/3755915520]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T18:28:50.222 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:50 vm04 ceph-mon[51427]: from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T18:28:50.222 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:50 vm04 ceph-mon[51427]: from='client.? 
192.168.123.104:0/2122005388' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T18:28:50.222 INFO:teuthology.orchestra.run.vm04.stdout:{"epoch":40,"num_osds":8,"num_up_osds":7,"osd_up_since":1773080921,"num_in_osds":8,"osd_in_since":1773080921,"num_remapped_pgs":0} 2026-03-09T18:28:50.223 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:50 vm04 ceph-mon[57581]: pgmap v85: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T18:28:50.223 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:50 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:50.223 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:50 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:50.223 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:50 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:50.223 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:50 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:28:50.223 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:50 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:50.223 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:50 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:28:50.223 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:50 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:50.223 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:50 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:50.223 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:50 vm04 ceph-mon[57581]: from='osd.7 [v2:192.168.123.109:6824/3755915520,v1:192.168.123.109:6825/3755915520]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T18:28:50.223 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:50 vm04 ceph-mon[57581]: from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T18:28:50.223 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:50 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/2122005388' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T18:28:50.534 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:50 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:50.534 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:50 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:50.534 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:50 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:28:50.534 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:50 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:50.534 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:50 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:28:50.534 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:50 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:50.534 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:50 vm09 ceph-mon[54744]: from='mgr.14150 
192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:50.534 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:50 vm09 ceph-mon[54744]: from='osd.7 [v2:192.168.123.109:6824/3755915520,v1:192.168.123.109:6825/3755915520]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T18:28:50.534 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:50 vm09 ceph-mon[54744]: from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T18:28:50.534 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:50 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/2122005388' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T18:28:51.224 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph osd stat -f json 2026-03-09T18:28:51.414 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config 2026-03-09T18:28:51.659 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:28:51.716 INFO:teuthology.orchestra.run.vm04.stdout:{"epoch":42,"num_osds":8,"num_up_osds":7,"osd_up_since":1773080921,"num_in_osds":8,"osd_in_since":1773080921,"num_remapped_pgs":0} 2026-03-09T18:28:51.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:51 vm04 ceph-mon[57581]: Detected new or changed devices on vm09 2026-03-09T18:28:51.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:51 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:51.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:51 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:51.967 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:51 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:28:51.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:51 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:28:51.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:51 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:28:51.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:51 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:28:51.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:51 vm04 ceph-mon[57581]: Adjusting osd_memory_target on vm09 to 65803k 2026-03-09T18:28:51.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:51 vm04 ceph-mon[57581]: Unable to set osd_memory_target on vm09 to 67382681: error parsing value: Value '67382681' is below minimum 939524096 2026-03-09T18:28:51.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:51 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:51.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:51 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:28:51.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:51 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:51.967 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:51 vm04 ceph-mon[57581]: from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T18:28:51.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:51 vm04 ceph-mon[57581]: osdmap e41: 8 total, 7 up, 8 in 2026-03-09T18:28:51.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:51 vm04 ceph-mon[57581]: from='osd.7 [v2:192.168.123.109:6824/3755915520,v1:192.168.123.109:6825/3755915520]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T18:28:51.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:51 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:28:51.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:51 vm04 ceph-mon[57581]: from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T18:28:51.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:51 vm04 ceph-mon[57581]: pgmap v87: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T18:28:51.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:51 vm04 ceph-mon[51427]: Detected new or changed devices on vm09 2026-03-09T18:28:51.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:51 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:51.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:51 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:51.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:51 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": 
"osd_memory_target"}]: dispatch 2026-03-09T18:28:51.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:51 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:28:51.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:51 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:28:51.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:51 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:28:51.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:51 vm04 ceph-mon[51427]: Adjusting osd_memory_target on vm09 to 65803k 2026-03-09T18:28:51.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:51 vm04 ceph-mon[51427]: Unable to set osd_memory_target on vm09 to 67382681: error parsing value: Value '67382681' is below minimum 939524096 2026-03-09T18:28:51.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:51 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:51.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:51 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:28:51.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:51 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:51.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:51 vm04 ceph-mon[51427]: from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T18:28:51.968 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:51 vm04 ceph-mon[51427]: osdmap e41: 8 total, 7 up, 8 in 2026-03-09T18:28:51.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:51 vm04 ceph-mon[51427]: from='osd.7 [v2:192.168.123.109:6824/3755915520,v1:192.168.123.109:6825/3755915520]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T18:28:51.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:51 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:28:51.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:51 vm04 ceph-mon[51427]: from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T18:28:51.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:51 vm04 ceph-mon[51427]: pgmap v87: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T18:28:52.108 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:28:51 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:28:51.647+0000 7f49b79f3640 -1 osd.7 0 waiting for initial osdmap 2026-03-09T18:28:52.108 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:28:51 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:28:51.655+0000 7f49b381d640 -1 osd.7 42 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T18:28:52.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:51 vm09 ceph-mon[54744]: Detected new or changed devices on vm09 2026-03-09T18:28:52.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:51 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:52.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:51 vm09 ceph-mon[54744]: 
from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:52.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:51 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:28:52.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:51 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:28:52.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:51 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:28:52.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:51 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:28:52.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:51 vm09 ceph-mon[54744]: Adjusting osd_memory_target on vm09 to 65803k 2026-03-09T18:28:52.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:51 vm09 ceph-mon[54744]: Unable to set osd_memory_target on vm09 to 67382681: error parsing value: Value '67382681' is below minimum 939524096 2026-03-09T18:28:52.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:51 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:52.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:51 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:28:52.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:51 vm09 ceph-mon[54744]: from='mgr.14150 
192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:52.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:51 vm09 ceph-mon[54744]: from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T18:28:52.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:51 vm09 ceph-mon[54744]: osdmap e41: 8 total, 7 up, 8 in 2026-03-09T18:28:52.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:51 vm09 ceph-mon[54744]: from='osd.7 [v2:192.168.123.109:6824/3755915520,v1:192.168.123.109:6825/3755915520]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T18:28:52.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:51 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:28:52.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:51 vm09 ceph-mon[54744]: from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T18:28:52.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:51 vm09 ceph-mon[54744]: pgmap v87: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T18:28:52.717 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph osd stat -f json 2026-03-09T18:28:52.909 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config 2026-03-09T18:28:52.938 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:52 vm04 ceph-mon[51427]: from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, 
"weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T18:28:52.939 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:52 vm04 ceph-mon[51427]: osdmap e42: 8 total, 7 up, 8 in 2026-03-09T18:28:52.939 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:52 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:28:52.939 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:52 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:28:52.939 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:52 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3978017681' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T18:28:52.939 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:52 vm04 ceph-mon[57581]: from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T18:28:52.939 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:52 vm04 ceph-mon[57581]: osdmap e42: 8 total, 7 up, 8 in 2026-03-09T18:28:52.939 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:52 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:28:52.939 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:52 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:28:52.939 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:52 vm04 ceph-mon[57581]: from='client.? 
192.168.123.104:0/3978017681' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T18:28:53.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:52 vm09 ceph-mon[54744]: from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T18:28:53.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:52 vm09 ceph-mon[54744]: osdmap e42: 8 total, 7 up, 8 in 2026-03-09T18:28:53.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:52 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:28:53.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:52 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:28:53.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:52 vm09 ceph-mon[54744]: from='client.? 
192.168.123.104:0/3978017681' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T18:28:53.185 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:28:53.256 INFO:teuthology.orchestra.run.vm04.stdout:{"epoch":44,"num_osds":8,"num_up_osds":8,"osd_up_since":1773080932,"num_in_osds":8,"osd_in_since":1773080921,"num_remapped_pgs":0} 2026-03-09T18:28:53.256 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph osd dump --format=json 2026-03-09T18:28:53.439 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config 2026-03-09T18:28:53.675 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:28:53.675 INFO:teuthology.orchestra.run.vm04.stdout:{"epoch":44,"fsid":"5769e1c8-1be5-11f1-a591-591820987f3e","created":"2026-03-09T18:26:36.478572+0000","modified":"2026-03-09T18:28:52.674901+0000","last_up_change":"2026-03-09T18:28:52.637078+0000","last_in_change":"2026-03-09T18:28:41.966037+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-09T18:28:00.728841+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_
placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"19","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"025c88ca-fa01-4cbd-9d6d-c54757ade897","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":43,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6802","nonce":1654539160},{"type":"v1","addr":"192.168.123.104:6803","nonce":1654539160}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6804","nonce":1654539160},{"type":"v1","addr":"192.168.123.104:6805","nonce":1654539160}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6808","nonce":1654539160},{"type":"v1","addr":"192.168.123.104:6809","nonce":1654539160}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6806","nonce":1654539160},{"type":"v1","addr":"192.168.123.104:6807","nonce":1654539160}]},"public_addr":"192.168.123.104:6803/1654539160","cluster_addr":"192.168.123.104:6805/1654539160","heartbeat_back_addr":"192.168.123.104:6809/1654539160","heartbeat_front_addr":"192.168.123.104:6807/1654539160","state":["exists","up"]},{"osd":1,"uuid":"f62082e3-9d11-4672-a72c-53d7908dbcd4","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":12,"up_thru":28,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6810","nonce":3519470547},{"type":"v1","addr":"192.168.123.104:6811","nonce":3519470547}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6812","nonce":3519470547},{"type":"v1","addr":"192.168.123.104:6813","nonce":3519470547}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6816","nonce":3519470547},{"type":"v1","addr":"192.168.123.104:6817","nonce":3519470547}]},"hea
rtbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6814","nonce":3519470547},{"type":"v1","addr":"192.168.123.104:6815","nonce":3519470547}]},"public_addr":"192.168.123.104:6811/3519470547","cluster_addr":"192.168.123.104:6813/3519470547","heartbeat_back_addr":"192.168.123.104:6817/3519470547","heartbeat_front_addr":"192.168.123.104:6815/3519470547","state":["exists","up"]},{"osd":2,"uuid":"9c64b919-8d93-49bb-84a4-7291defe1cb0","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":16,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6818","nonce":1080091581},{"type":"v1","addr":"192.168.123.104:6819","nonce":1080091581}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6820","nonce":1080091581},{"type":"v1","addr":"192.168.123.104:6821","nonce":1080091581}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6824","nonce":1080091581},{"type":"v1","addr":"192.168.123.104:6825","nonce":1080091581}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6822","nonce":1080091581},{"type":"v1","addr":"192.168.123.104:6823","nonce":1080091581}]},"public_addr":"192.168.123.104:6819/1080091581","cluster_addr":"192.168.123.104:6821/1080091581","heartbeat_back_addr":"192.168.123.104:6825/1080091581","heartbeat_front_addr":"192.168.123.104:6823/1080091581","state":["exists","up"]},{"osd":3,"uuid":"c3feb6a9-175f-4b52-934d-734e9f86504a","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":23,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6826","nonce":3227748853},{"type":"v1","addr":"192.168.123.104:6827","nonce":3227748853}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6828","nonce":3227748853},{"type":"v1","addr":"192.168.123.104:6829","nonce":3227748853}]},"heartbeat_back_addrs":{"addrvec":[{"type
":"v2","addr":"192.168.123.104:6832","nonce":3227748853},{"type":"v1","addr":"192.168.123.104:6833","nonce":3227748853}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6830","nonce":3227748853},{"type":"v1","addr":"192.168.123.104:6831","nonce":3227748853}]},"public_addr":"192.168.123.104:6827/3227748853","cluster_addr":"192.168.123.104:6829/3227748853","heartbeat_back_addr":"192.168.123.104:6833/3227748853","heartbeat_front_addr":"192.168.123.104:6831/3227748853","state":["exists","up"]},{"osd":4,"uuid":"d1342c95-9bc8-457d-bd07-044a344312a1","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":27,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6800","nonce":2821151016},{"type":"v1","addr":"192.168.123.109:6801","nonce":2821151016}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6802","nonce":2821151016},{"type":"v1","addr":"192.168.123.109:6803","nonce":2821151016}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6806","nonce":2821151016},{"type":"v1","addr":"192.168.123.109:6807","nonce":2821151016}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6804","nonce":2821151016},{"type":"v1","addr":"192.168.123.109:6805","nonce":2821151016}]},"public_addr":"192.168.123.109:6801/2821151016","cluster_addr":"192.168.123.109:6803/2821151016","heartbeat_back_addr":"192.168.123.109:6807/2821151016","heartbeat_front_addr":"192.168.123.109:6805/2821151016","state":["exists","up"]},{"osd":5,"uuid":"a0757438-7809-4314-b9dd-37b37818922c","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":33,"up_thru":34,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6808","nonce":3792197053},{"type":"v1","addr":"192.168.123.109:6809","nonce":3792197053}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6810"
,"nonce":3792197053},{"type":"v1","addr":"192.168.123.109:6811","nonce":3792197053}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6814","nonce":3792197053},{"type":"v1","addr":"192.168.123.109:6815","nonce":3792197053}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6812","nonce":3792197053},{"type":"v1","addr":"192.168.123.109:6813","nonce":3792197053}]},"public_addr":"192.168.123.109:6809/3792197053","cluster_addr":"192.168.123.109:6811/3792197053","heartbeat_back_addr":"192.168.123.109:6815/3792197053","heartbeat_front_addr":"192.168.123.109:6813/3792197053","state":["exists","up"]},{"osd":6,"uuid":"45c78567-75e6-4026-9257-685a9df3da40","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":38,"up_thru":39,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6816","nonce":434417640},{"type":"v1","addr":"192.168.123.109:6817","nonce":434417640}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6818","nonce":434417640},{"type":"v1","addr":"192.168.123.109:6819","nonce":434417640}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6822","nonce":434417640},{"type":"v1","addr":"192.168.123.109:6823","nonce":434417640}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6820","nonce":434417640},{"type":"v1","addr":"192.168.123.109:6821","nonce":434417640}]},"public_addr":"192.168.123.109:6817/434417640","cluster_addr":"192.168.123.109:6819/434417640","heartbeat_back_addr":"192.168.123.109:6823/434417640","heartbeat_front_addr":"192.168.123.109:6821/434417640","state":["exists","up"]},{"osd":7,"uuid":"df6ed9a4-e641-43b0-965e-fef9ac178911","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":43,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":3755915520},{"type":"v1","addr":
"192.168.123.109:6825","nonce":3755915520}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6826","nonce":3755915520},{"type":"v1","addr":"192.168.123.109:6827","nonce":3755915520}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6830","nonce":3755915520},{"type":"v1","addr":"192.168.123.109:6831","nonce":3755915520}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6828","nonce":3755915520},{"type":"v1","addr":"192.168.123.109:6829","nonce":3755915520}]},"public_addr":"192.168.123.109:6825/3755915520","cluster_addr":"192.168.123.109:6827/3755915520","heartbeat_back_addr":"192.168.123.109:6831/3755915520","heartbeat_front_addr":"192.168.123.109:6829/3755915520","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:27:36.911804+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:27:48.734946+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:27:58.725426+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:28:09.412692+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:28:19.189100+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:28:30.522704+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probabi
lity":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:28:40.429409+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.104:0/2245319889":"2026-03-10T18:26:58.489749+0000","192.168.123.104:6801/3893019713":"2026-03-10T18:26:58.489749+0000","192.168.123.104:0/2759174145":"2026-03-10T18:26:58.489749+0000","192.168.123.104:0/2615826806":"2026-03-10T18:26:48.693718+0000","192.168.123.104:0/2830712721":"2026-03-10T18:26:58.489749+0000","192.168.123.104:0/2715425455":"2026-03-10T18:26:48.693718+0000","192.168.123.104:6801/2403102279":"2026-03-10T18:26:48.693718+0000","192.168.123.104:6800/3893019713":"2026-03-10T18:26:58.489749+0000","192.168.123.104:6800/2403102279":"2026-03-10T18:26:48.693718+0000","192.168.123.104:0/1056116274":"2026-03-10T18:26:48.693718+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T18:28:53.718 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:53 vm04 ceph-mon[51427]: purged_snaps scrub starts 2026-03-09T18:28:53.718 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:53 vm04 ceph-mon[51427]: purged_snaps scrub ok 2026-03-09T18:28:53.718 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:53 vm04 ceph-mon[51427]: osd.7 [v2:192.168.123.109:6824/3755915520,v1:192.168.123.109:6825/3755915520] boot 2026-03-09T18:28:53.718 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:53 vm04 ceph-mon[51427]: osdmap e43: 8 total, 8 up, 8 in 2026-03-09T18:28:53.718 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:53 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:28:53.718 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:53 vm04 ceph-mon[51427]: osdmap e44: 8 total, 8 up, 8 in 2026-03-09T18:28:53.718 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:53 vm04 ceph-mon[51427]: pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:28:53.718 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:53 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/1376696647' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T18:28:53.718 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:53 vm04 ceph-mon[57581]: purged_snaps scrub starts 2026-03-09T18:28:53.718 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:53 vm04 ceph-mon[57581]: purged_snaps scrub ok 2026-03-09T18:28:53.718 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:53 vm04 ceph-mon[57581]: osd.7 [v2:192.168.123.109:6824/3755915520,v1:192.168.123.109:6825/3755915520] boot 2026-03-09T18:28:53.718 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:53 vm04 ceph-mon[57581]: osdmap e43: 8 total, 8 up, 8 in 2026-03-09T18:28:53.718 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:53 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:28:53.718 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:53 vm04 ceph-mon[57581]: osdmap e44: 8 total, 8 up, 8 in 2026-03-09T18:28:53.718 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:53 vm04 ceph-mon[57581]: pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:28:53.718 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:53 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/1376696647' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T18:28:53.745 INFO:tasks.cephadm.ceph_manager.ceph:[{'pool': 1, 'pool_name': '.mgr', 'create_time': '2026-03-09T18:28:00.728841+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'is_stretch_pool': False, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '19', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': 
{'score_type': 'Fair distribution', 'score_acting': 7.889999866485596, 'score_stable': 7.889999866485596, 'optimal_score': 0.3799999952316284, 'raw_score_acting': 3, 'raw_score_stable': 3, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}] 2026-03-09T18:28:53.745 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph osd pool get .mgr pg_num 2026-03-09T18:28:53.929 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config 2026-03-09T18:28:54.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:53 vm09 ceph-mon[54744]: purged_snaps scrub starts 2026-03-09T18:28:54.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:53 vm09 ceph-mon[54744]: purged_snaps scrub ok 2026-03-09T18:28:54.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:53 vm09 ceph-mon[54744]: osd.7 [v2:192.168.123.109:6824/3755915520,v1:192.168.123.109:6825/3755915520] boot 2026-03-09T18:28:54.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:53 vm09 ceph-mon[54744]: osdmap e43: 8 total, 8 up, 8 in 2026-03-09T18:28:54.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:53 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:28:54.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:53 vm09 ceph-mon[54744]: osdmap e44: 8 total, 8 up, 8 in 2026-03-09T18:28:54.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:53 vm09 ceph-mon[54744]: pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:28:54.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:53 vm09 ceph-mon[54744]: from='client.? 
192.168.123.104:0/1376696647' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T18:28:54.182 INFO:teuthology.orchestra.run.vm04.stdout:pg_num: 1 2026-03-09T18:28:54.234 INFO:tasks.cephadm:Adding ceph.rgw.foo.a on vm04 2026-03-09T18:28:54.234 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph orch apply rgw foo.a --placement '1;vm04=foo.a' 2026-03-09T18:28:54.434 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.b/config 2026-03-09T18:28:54.687 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled rgw.foo.a update... 2026-03-09T18:28:54.776 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:54 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/726682385' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T18:28:54.776 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:54 vm09 ceph-mon[54744]: osdmap e45: 8 total, 8 up, 8 in 2026-03-09T18:28:54.776 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:54 vm09 ceph-mon[54744]: from='client.? 
192.168.123.104:0/3144902901' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T18:28:54.778 DEBUG:teuthology.orchestra.run.vm04:rgw.foo.a> sudo journalctl -f -n 0 -u ceph-5769e1c8-1be5-11f1-a591-591820987f3e@rgw.foo.a.service 2026-03-09T18:28:54.779 INFO:tasks.cephadm:Adding ceph.iscsi.iscsi.a on vm09 2026-03-09T18:28:54.780 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph osd pool create datapool 3 3 replicated 2026-03-09T18:28:54.942 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:54 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/726682385' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T18:28:54.942 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:54 vm04 ceph-mon[51427]: osdmap e45: 8 total, 8 up, 8 in 2026-03-09T18:28:54.942 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:54 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3144902901' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T18:28:54.942 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:54 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/726682385' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T18:28:54.942 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:54 vm04 ceph-mon[57581]: osdmap e45: 8 total, 8 up, 8 in 2026-03-09T18:28:54.942 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:54 vm04 ceph-mon[57581]: from='client.? 
192.168.123.104:0/3144902901' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T18:28:54.977 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.b/config 2026-03-09T18:28:55.508 INFO:journalctl@ceph.rgw.foo.a.vm04.stdout:Mar 09 18:28:55 vm04 systemd[1]: Starting Ceph rgw.foo.a for 5769e1c8-1be5-11f1-a591-591820987f3e... 2026-03-09T18:28:55.717 INFO:teuthology.orchestra.run.vm09.stderr:pool 'datapool' created 2026-03-09T18:28:55.797 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- rbd pool init datapool 2026-03-09T18:28:55.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:55 vm04 ceph-mon[57581]: from='client.24319 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm04=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:28:55.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:55 vm04 ceph-mon[57581]: Saving service rgw.foo.a spec with placement vm04=foo.a;count:1 2026-03-09T18:28:55.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:55 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:55.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:55 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:28:55.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:55 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:55.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:55 vm04 ceph-mon[57581]: from='mgr.14150 
192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:28:55.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:55 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:55.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:55 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T18:28:55.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:55 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-09T18:28:55.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:55 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:55.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:55 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:55.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:55 vm04 ceph-mon[57581]: Deploying daemon rgw.foo.a on vm04 2026-03-09T18:28:55.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:55 vm04 ceph-mon[57581]: pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:28:55.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:55 vm04 ceph-mon[57581]: from='client.? 
192.168.123.109:0/3984942061' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T18:28:55.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:55 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T18:28:55.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:55 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:55.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:55 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:55.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:55 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:55.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:55 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:55.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:55 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:55.968 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:55 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:28:55.968 INFO:journalctl@ceph.rgw.foo.a.vm04.stdout:Mar 09 18:28:55 vm04 podman[80451]: 2026-03-09 18:28:55.507842722 +0000 UTC m=+0.021351767 container create 3aec93409a4eb1e503c4a9c0725f93c44c51551aaa236d59c1d2bb7ff75038b3 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-rgw-foo-a, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, 
FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223) 2026-03-09T18:28:55.968 INFO:journalctl@ceph.rgw.foo.a.vm04.stdout:Mar 09 18:28:55 vm04 podman[80451]: 2026-03-09 18:28:55.557160474 +0000 UTC m=+0.070669530 container init 3aec93409a4eb1e503c4a9c0725f93c44c51551aaa236d59c1d2bb7ff75038b3 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-rgw-foo-a, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-09T18:28:55.968 INFO:journalctl@ceph.rgw.foo.a.vm04.stdout:Mar 09 18:28:55 vm04 podman[80451]: 2026-03-09 18:28:55.559963471 +0000 UTC m=+0.073472516 container start 3aec93409a4eb1e503c4a9c0725f93c44c51551aaa236d59c1d2bb7ff75038b3 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-rgw-foo-a, 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-09T18:28:55.968 INFO:journalctl@ceph.rgw.foo.a.vm04.stdout:Mar 09 18:28:55 vm04 bash[80451]: 3aec93409a4eb1e503c4a9c0725f93c44c51551aaa236d59c1d2bb7ff75038b3 2026-03-09T18:28:55.968 INFO:journalctl@ceph.rgw.foo.a.vm04.stdout:Mar 09 18:28:55 vm04 podman[80451]: 2026-03-09 18:28:55.498858877 +0000 UTC m=+0.012367932 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T18:28:55.968 INFO:journalctl@ceph.rgw.foo.a.vm04.stdout:Mar 09 18:28:55 vm04 systemd[1]: Started Ceph rgw.foo.a for 5769e1c8-1be5-11f1-a591-591820987f3e. 
2026-03-09T18:28:55.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:55 vm04 ceph-mon[51427]: from='client.24319 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm04=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:28:55.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:55 vm04 ceph-mon[51427]: Saving service rgw.foo.a spec with placement vm04=foo.a;count:1 2026-03-09T18:28:55.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:55 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:55.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:55 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:28:55.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:55 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:55.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:55 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:28:55.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:55 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:55.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:55 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T18:28:55.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:55 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow 
*", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-09T18:28:55.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:55 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:55.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:55 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:55.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:55 vm04 ceph-mon[51427]: Deploying daemon rgw.foo.a on vm04 2026-03-09T18:28:55.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:55 vm04 ceph-mon[51427]: pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:28:55.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:55 vm04 ceph-mon[51427]: from='client.? 192.168.123.109:0/3984942061' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T18:28:55.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:55 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T18:28:55.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:55 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:55.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:55 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:55.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:55 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:55.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:55 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:55.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:55 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:55.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:55 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:28:55.990 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.b/config 2026-03-09T18:28:56.016 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:55 vm09 ceph-mon[54744]: from='client.24319 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm04=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:28:56.016 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:55 vm09 ceph-mon[54744]: Saving service rgw.foo.a spec with placement vm04=foo.a;count:1 2026-03-09T18:28:56.016 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:55 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:56.016 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:55 vm09 
ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:28:56.016 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:55 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:56.016 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:55 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:28:56.016 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:55 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:56.016 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:55 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T18:28:56.017 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:55 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-09T18:28:56.017 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:55 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:56.017 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:55 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:56.017 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:55 vm09 ceph-mon[54744]: Deploying daemon rgw.foo.a on vm04 2026-03-09T18:28:56.017 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:55 vm09 
ceph-mon[54744]: pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:28:56.017 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:55 vm09 ceph-mon[54744]: from='client.? 192.168.123.109:0/3984942061' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T18:28:56.017 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:55 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T18:28:56.017 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:55 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:56.017 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:55 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:56.017 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:55 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:56.017 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:55 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:56.017 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:55 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:56.017 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:55 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:28:56.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:56 vm04 ceph-mon[57581]: Saving service rgw.foo.a spec with placement vm04=foo.a;count:1 2026-03-09T18:28:56.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:56 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-09T18:28:56.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:56 vm04 ceph-mon[57581]: osdmap e46: 8 total, 8 up, 8 in 2026-03-09T18:28:56.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:56 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/189350725' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T18:28:56.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:56 vm04 ceph-mon[57581]: from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T18:28:56.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:56 vm04 ceph-mon[57581]: from='client.? 192.168.123.109:0/1632270089' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T18:28:56.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:56 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T18:28:56.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:56 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:56.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:56 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:56.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:56 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:56.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:56 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:28:56.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:56 vm04 ceph-mon[51427]: Saving service rgw.foo.a spec with placement vm04=foo.a;count:1 2026-03-09T18:28:56.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:56 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-09T18:28:56.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:56 vm04 ceph-mon[51427]: osdmap e46: 8 total, 8 up, 8 in 2026-03-09T18:28:56.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:56 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/189350725' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T18:28:56.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:56 vm04 ceph-mon[51427]: from='client.? 
' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T18:28:56.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:56 vm04 ceph-mon[51427]: from='client.? 192.168.123.109:0/1632270089' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T18:28:56.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:56 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T18:28:56.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:56 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:56.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:56 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:56.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:56 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:56.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:56 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:28:57.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:56 vm09 ceph-mon[54744]: Saving service rgw.foo.a spec with placement vm04=foo.a;count:1 2026-03-09T18:28:57.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:56 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-09T18:28:57.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:56 vm09 ceph-mon[54744]: osdmap e46: 8 total, 8 up, 8 in 2026-03-09T18:28:57.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:56 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/189350725' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T18:28:57.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:56 vm09 ceph-mon[54744]: from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T18:28:57.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:56 vm09 ceph-mon[54744]: from='client.? 192.168.123.109:0/1632270089' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T18:28:57.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:56 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T18:28:57.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:56 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:57.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:56 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:57.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:56 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:57.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:56 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:28:58.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:57 vm09 ceph-mon[54744]: Checking dashboard <-> RGW credentials 2026-03-09T18:28:58.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:57 vm09 ceph-mon[54744]: pgmap v95: 36 pgs: 1 creating+peering, 34 unknown, 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:28:58.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:57 vm09 ceph-mon[54744]: from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-09T18:28:58.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:57 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-09T18:28:58.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:57 vm09 ceph-mon[54744]: osdmap e47: 8 total, 8 up, 8 in 2026-03-09T18:28:58.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:57 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:58.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:57 vm09 ceph-mon[54744]: Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:28:58.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:57 vm04 ceph-mon[57581]: Checking dashboard <-> RGW credentials 2026-03-09T18:28:58.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:57 vm04 ceph-mon[57581]: pgmap v95: 36 pgs: 1 creating+peering, 34 unknown, 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:28:58.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:57 vm04 ceph-mon[57581]: from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-09T18:28:58.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:57 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-09T18:28:58.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:57 vm04 ceph-mon[57581]: osdmap e47: 8 total, 8 up, 8 in 2026-03-09T18:28:58.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:57 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:58.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:57 vm04 ceph-mon[57581]: Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:28:58.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:57 vm04 ceph-mon[51427]: Checking dashboard <-> RGW credentials 2026-03-09T18:28:58.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:57 vm04 ceph-mon[51427]: pgmap v95: 36 pgs: 1 creating+peering, 34 unknown, 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:28:58.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:57 vm04 ceph-mon[51427]: from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-09T18:28:58.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:57 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-09T18:28:58.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:57 vm04 ceph-mon[51427]: osdmap e47: 8 total, 8 up, 8 in 2026-03-09T18:28:58.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:57 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:58.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:57 vm04 ceph-mon[51427]: Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:28:58.804 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph orch apply iscsi datapool admin admin --trusted_ip_list 192.168.123.109 --placement '1;vm09=iscsi.a' 2026-03-09T18:28:58.995 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.b/config 2026-03-09T18:28:59.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:58 vm09 ceph-mon[54744]: osdmap e48: 8 total, 8 up, 8 in 2026-03-09T18:28:59.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:58 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3208771543' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T18:28:59.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:58 vm09 ceph-mon[54744]: from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T18:28:59.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:58 vm09 ceph-mon[54744]: from='client.? 
192.168.123.104:0/3802240141' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T18:28:59.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:58 vm09 ceph-mon[54744]: from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T18:28:59.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:58 vm04 ceph-mon[57581]: osdmap e48: 8 total, 8 up, 8 in 2026-03-09T18:28:59.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:58 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3208771543' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T18:28:59.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:58 vm04 ceph-mon[57581]: from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T18:28:59.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:58 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3802240141' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T18:28:59.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:58 vm04 ceph-mon[57581]: from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T18:28:59.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:58 vm04 ceph-mon[51427]: osdmap e48: 8 total, 8 up, 8 in 2026-03-09T18:28:59.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:58 vm04 ceph-mon[51427]: from='client.? 
192.168.123.104:0/3208771543' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T18:28:59.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:58 vm04 ceph-mon[51427]: from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T18:28:59.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:58 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3802240141' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T18:28:59.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:58 vm04 ceph-mon[51427]: from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T18:28:59.250 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled iscsi.datapool update... 2026-03-09T18:28:59.302 INFO:tasks.cephadm:Distributing iscsi-gateway.cfg... 
2026-03-09T18:28:59.302 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T18:28:59.302 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/ceph/iscsi-gateway.cfg 2026-03-09T18:28:59.331 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T18:28:59.331 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/ceph/iscsi-gateway.cfg 2026-03-09T18:28:59.359 DEBUG:teuthology.orchestra.run.vm09:iscsi.iscsi.a> sudo journalctl -f -n 0 -u ceph-5769e1c8-1be5-11f1-a591-591820987f3e@iscsi.iscsi.a.service 2026-03-09T18:28:59.402 INFO:tasks.cephadm:Adding prometheus.a on vm09 2026-03-09T18:28:59.402 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph orch apply prometheus '1;vm09=a' 2026-03-09T18:28:59.627 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.b/config 2026-03-09T18:28:59.886 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled prometheus update... 2026-03-09T18:28:59.909 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:59 vm09 ceph-mon[54744]: pgmap v98: 68 pgs: 26 active+clean, 3 creating+peering, 39 unknown; 450 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 2.0 KiB/s rd, 815 B/s wr, 3 op/s 2026-03-09T18:28:59.909 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:59 vm09 ceph-mon[54744]: from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-09T18:28:59.909 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:59 vm09 ceph-mon[54744]: from='client.? 
' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-09T18:28:59.909 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:59 vm09 ceph-mon[54744]: osdmap e49: 8 total, 8 up, 8 in 2026-03-09T18:28:59.909 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:59 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:28:59.909 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:59 vm09 ceph-mon[54744]: osdmap e50: 8 total, 8 up, 8 in 2026-03-09T18:28:59.909 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:59 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3208771543' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T18:28:59.909 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:59 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3802240141' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T18:28:59.909 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:59 vm09 ceph-mon[54744]: from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T18:28:59.909 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:28:59 vm09 ceph-mon[54744]: from='client.? 
' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T18:28:59.948 DEBUG:teuthology.orchestra.run.vm09:prometheus.a> sudo journalctl -f -n 0 -u ceph-5769e1c8-1be5-11f1-a591-591820987f3e@prometheus.a.service 2026-03-09T18:28:59.950 INFO:tasks.cephadm:Adding node-exporter.a on vm04 2026-03-09T18:28:59.950 INFO:tasks.cephadm:Adding node-exporter.b on vm09 2026-03-09T18:28:59.950 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph orch apply node-exporter '2;vm04=a;vm09=b' 2026-03-09T18:29:00.178 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.b/config 2026-03-09T18:29:00.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:59 vm04 ceph-mon[51427]: pgmap v98: 68 pgs: 26 active+clean, 3 creating+peering, 39 unknown; 450 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 2.0 KiB/s rd, 815 B/s wr, 3 op/s 2026-03-09T18:29:00.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:59 vm04 ceph-mon[51427]: from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-09T18:29:00.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:59 vm04 ceph-mon[51427]: from='client.? 
' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-09T18:29:00.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:59 vm04 ceph-mon[51427]: osdmap e49: 8 total, 8 up, 8 in 2026-03-09T18:29:00.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:59 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:29:00.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:59 vm04 ceph-mon[51427]: osdmap e50: 8 total, 8 up, 8 in 2026-03-09T18:29:00.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:59 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3208771543' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T18:29:00.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:59 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3802240141' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T18:29:00.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:59 vm04 ceph-mon[51427]: from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T18:29:00.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:28:59 vm04 ceph-mon[51427]: from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T18:29:00.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:59 vm04 ceph-mon[57581]: pgmap v98: 68 pgs: 26 active+clean, 3 creating+peering, 39 unknown; 450 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 2.0 KiB/s rd, 815 B/s wr, 3 op/s 2026-03-09T18:29:00.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:59 vm04 ceph-mon[57581]: from='client.? 
' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-09T18:29:00.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:59 vm04 ceph-mon[57581]: from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-09T18:29:00.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:59 vm04 ceph-mon[57581]: osdmap e49: 8 total, 8 up, 8 in 2026-03-09T18:29:00.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:59 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:29:00.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:59 vm04 ceph-mon[57581]: osdmap e50: 8 total, 8 up, 8 in 2026-03-09T18:29:00.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:59 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3208771543' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T18:29:00.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:59 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3802240141' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T18:29:00.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:59 vm04 ceph-mon[57581]: from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T18:29:00.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:28:59 vm04 ceph-mon[57581]: from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T18:29:00.430 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled node-exporter update... 
2026-03-09T18:29:00.484 DEBUG:teuthology.orchestra.run.vm04:node-exporter.a> sudo journalctl -f -n 0 -u ceph-5769e1c8-1be5-11f1-a591-591820987f3e@node-exporter.a.service 2026-03-09T18:29:00.486 DEBUG:teuthology.orchestra.run.vm09:node-exporter.b> sudo journalctl -f -n 0 -u ceph-5769e1c8-1be5-11f1-a591-591820987f3e@node-exporter.b.service 2026-03-09T18:29:00.488 INFO:tasks.cephadm:Adding alertmanager.a on vm04 2026-03-09T18:29:00.488 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph orch apply alertmanager '1;vm04=a' 2026-03-09T18:29:00.715 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.b/config 2026-03-09T18:29:01.052 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled alertmanager update... 2026-03-09T18:29:01.052 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:00 vm09 ceph-mon[54744]: from='client.24335 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.109", "placement": "1;vm09=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:29:01.052 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:00 vm09 ceph-mon[54744]: Saving service iscsi.datapool spec with placement vm09=iscsi.a;count:1 2026-03-09T18:29:01.052 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:00 vm09 ceph-mon[54744]: from='client.24364 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm09=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:29:01.052 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:00 vm09 ceph-mon[54744]: Saving service prometheus spec with placement vm09=a;count:1 2026-03-09T18:29:01.052 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:00 
vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:29:01.052 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:00 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:29:01.052 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:00 vm09 ceph-mon[54744]: from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-09T18:29:01.052 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:00 vm09 ceph-mon[54744]: from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-09T18:29:01.053 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:00 vm09 ceph-mon[54744]: osdmap e51: 8 total, 8 up, 8 in 2026-03-09T18:29:01.125 DEBUG:teuthology.orchestra.run.vm04:alertmanager.a> sudo journalctl -f -n 0 -u ceph-5769e1c8-1be5-11f1-a591-591820987f3e@alertmanager.a.service 2026-03-09T18:29:01.127 INFO:tasks.cephadm:Adding grafana.a on vm09 2026-03-09T18:29:01.127 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph orch apply grafana '1;vm09=a' 2026-03-09T18:29:01.150 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:00 vm04 ceph-mon[57581]: from='client.24335 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.109", "placement": "1;vm09=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:29:01.150 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:00 vm04 ceph-mon[57581]: Saving service iscsi.datapool spec with placement vm09=iscsi.a;count:1 2026-03-09T18:29:01.150 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 
18:29:00 vm04 ceph-mon[57581]: from='client.24364 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm09=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:29:01.150 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:00 vm04 ceph-mon[57581]: Saving service prometheus spec with placement vm09=a;count:1 2026-03-09T18:29:01.150 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:00 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:29:01.150 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:00 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:29:01.150 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:00 vm04 ceph-mon[57581]: from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-09T18:29:01.150 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:00 vm04 ceph-mon[57581]: from='client.? 
' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-09T18:29:01.150 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:00 vm04 ceph-mon[57581]: osdmap e51: 8 total, 8 up, 8 in 2026-03-09T18:29:01.150 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:00 vm04 ceph-mon[51427]: from='client.24335 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.109", "placement": "1;vm09=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:29:01.150 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:00 vm04 ceph-mon[51427]: Saving service iscsi.datapool spec with placement vm09=iscsi.a;count:1 2026-03-09T18:29:01.150 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:00 vm04 ceph-mon[51427]: from='client.24364 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm09=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:29:01.150 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:00 vm04 ceph-mon[51427]: Saving service prometheus spec with placement vm09=a;count:1 2026-03-09T18:29:01.150 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:00 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:29:01.150 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:00 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:29:01.150 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:00 vm04 ceph-mon[51427]: from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-09T18:29:01.150 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:00 vm04 ceph-mon[51427]: from='client.? 
' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-09T18:29:01.150 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:00 vm04 ceph-mon[51427]: osdmap e51: 8 total, 8 up, 8 in 2026-03-09T18:29:01.307 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.b/config 2026-03-09T18:29:01.545 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled grafana update... 2026-03-09T18:29:01.610 DEBUG:teuthology.orchestra.run.vm09:grafana.a> sudo journalctl -f -n 0 -u ceph-5769e1c8-1be5-11f1-a591-591820987f3e@grafana.a.service 2026-03-09T18:29:01.611 INFO:tasks.cephadm:Setting up client nodes... 2026-03-09T18:29:01.612 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph auth get-or-create client.0 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-09T18:29:01.815 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config 2026-03-09T18:29:02.094 INFO:teuthology.orchestra.run.vm04.stdout:[client.0] 2026-03-09T18:29:02.094 INFO:teuthology.orchestra.run.vm04.stdout: key = AQBuEa9pMKpKBRAAd+KntXxDMFQVS3AVRu45mQ== 2026-03-09T18:29:02.140 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:02 vm04 ceph-mon[51427]: from='client.24370 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm04=a;vm09=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:29:02.140 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:02 vm04 ceph-mon[51427]: Saving service node-exporter spec with placement vm04=a;vm09=b;count:2 2026-03-09T18:29:02.140 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:02 vm04 ceph-mon[51427]: pgmap v101: 100 pgs: 45 
active+clean, 7 creating+peering, 48 unknown; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 3.5 KiB/s rd, 1.7 KiB/s wr, 6 op/s 2026-03-09T18:29:02.140 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:02 vm04 ceph-mon[51427]: from='client.24376 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "placement": "1;vm04=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:29:02.140 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:02 vm04 ceph-mon[51427]: Saving service alertmanager spec with placement vm04=a;count:1 2026-03-09T18:29:02.140 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:02 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:29:02.140 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:02 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:29:02.140 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:02 vm04 ceph-mon[51427]: osdmap e52: 8 total, 8 up, 8 in 2026-03-09T18:29:02.140 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:02 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3208771543' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T18:29:02.140 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:02 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3802240141' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T18:29:02.140 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:02 vm04 ceph-mon[51427]: from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T18:29:02.140 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:02 vm04 ceph-mon[51427]: from='client.? 
' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T18:29:02.140 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:02 vm04 ceph-mon[57581]: from='client.24370 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm04=a;vm09=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:29:02.141 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:02 vm04 ceph-mon[57581]: Saving service node-exporter spec with placement vm04=a;vm09=b;count:2 2026-03-09T18:29:02.141 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:02 vm04 ceph-mon[57581]: pgmap v101: 100 pgs: 45 active+clean, 7 creating+peering, 48 unknown; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 3.5 KiB/s rd, 1.7 KiB/s wr, 6 op/s 2026-03-09T18:29:02.141 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:02 vm04 ceph-mon[57581]: from='client.24376 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "placement": "1;vm04=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:29:02.141 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:02 vm04 ceph-mon[57581]: Saving service alertmanager spec with placement vm04=a;count:1 2026-03-09T18:29:02.141 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:02 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:29:02.141 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:02 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:29:02.141 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:02 vm04 ceph-mon[57581]: osdmap e52: 8 total, 8 up, 8 in 2026-03-09T18:29:02.141 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:02 vm04 ceph-mon[57581]: from='client.? 
192.168.123.104:0/3208771543' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T18:29:02.141 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:02 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3802240141' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T18:29:02.141 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:02 vm04 ceph-mon[57581]: from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T18:29:02.141 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:02 vm04 ceph-mon[57581]: from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T18:29:02.169 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T18:29:02.169 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/ceph/ceph.client.0.keyring 2026-03-09T18:29:02.169 DEBUG:teuthology.orchestra.run.vm04:> sudo chmod 0644 /etc/ceph/ceph.client.0.keyring 2026-03-09T18:29:02.207 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph auth get-or-create client.1 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-09T18:29:02.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:02 vm09 ceph-mon[54744]: from='client.24370 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm04=a;vm09=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:29:02.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:02 vm09 ceph-mon[54744]: Saving service node-exporter spec with placement vm04=a;vm09=b;count:2 2026-03-09T18:29:02.358 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:02 vm09 ceph-mon[54744]: pgmap v101: 100 pgs: 45 active+clean, 7 creating+peering, 48 unknown; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 3.5 KiB/s rd, 1.7 KiB/s wr, 6 op/s 2026-03-09T18:29:02.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:02 vm09 ceph-mon[54744]: from='client.24376 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "placement": "1;vm04=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:29:02.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:02 vm09 ceph-mon[54744]: Saving service alertmanager spec with placement vm04=a;count:1 2026-03-09T18:29:02.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:02 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:29:02.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:02 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:29:02.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:02 vm09 ceph-mon[54744]: osdmap e52: 8 total, 8 up, 8 in 2026-03-09T18:29:02.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:02 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3208771543' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T18:29:02.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:02 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3802240141' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T18:29:02.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:02 vm09 ceph-mon[54744]: from='client.? 
' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T18:29:02.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:02 vm09 ceph-mon[54744]: from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T18:29:02.391 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.b/config 2026-03-09T18:29:02.655 INFO:teuthology.orchestra.run.vm09.stdout:[client.1] 2026-03-09T18:29:02.655 INFO:teuthology.orchestra.run.vm09.stdout: key = AQBuEa9phnHFJhAAwne+1mqf1QgFdiMRFS5HsQ== 2026-03-09T18:29:02.711 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T18:29:02.711 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/ceph/ceph.client.1.keyring 2026-03-09T18:29:02.711 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod 0644 /etc/ceph/ceph.client.1.keyring 2026-03-09T18:29:02.749 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean... 
2026-03-09T18:29:02.750 INFO:tasks.cephadm.ceph_manager.ceph:waiting for mgr available 2026-03-09T18:29:02.750 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph mgr dump --format=json 2026-03-09T18:29:02.954 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config 2026-03-09T18:29:03.214 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:29:03.290 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:03 vm04 ceph-mon[51427]: from='client.24382 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm09=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:29:03.290 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:03 vm04 ceph-mon[51427]: Saving service grafana spec with placement vm09=a;count:1 2026-03-09T18:29:03.290 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:03 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/794292263' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T18:29:03.290 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:03 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T18:29:03.290 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:03 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T18:29:03.290 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:03 vm04 ceph-mon[51427]: from='client.? 
192.168.123.109:0/3131858292' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T18:29:03.290 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:03 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T18:29:03.290 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:03 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T18:29:03.290 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:03 vm04 ceph-mon[51427]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:29:03.290 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:03 vm04 ceph-mon[51427]: from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T18:29:03.290 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:03 vm04 ceph-mon[51427]: from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T18:29:03.290 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:03 vm04 ceph-mon[51427]: osdmap e53: 8 total, 8 up, 8 in 2026-03-09T18:29:03.290 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:03 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3208771543' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T18:29:03.290 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:03 vm04 ceph-mon[51427]: from='client.? 
192.168.123.104:0/3802240141' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T18:29:03.290 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:03 vm04 ceph-mon[51427]: from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T18:29:03.290 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:03 vm04 ceph-mon[51427]: from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T18:29:03.291 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:03 vm04 ceph-mon[57581]: from='client.24382 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm09=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:29:03.291 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:03 vm04 ceph-mon[57581]: Saving service grafana spec with placement vm09=a;count:1 2026-03-09T18:29:03.291 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:03 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/794292263' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T18:29:03.291 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:03 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T18:29:03.291 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:03 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T18:29:03.291 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:03 vm04 ceph-mon[57581]: from='client.? 192.168.123.109:0/3131858292' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T18:29:03.291 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:03 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T18:29:03.291 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:03 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T18:29:03.291 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:03 vm04 ceph-mon[57581]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:29:03.291 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:03 vm04 ceph-mon[57581]: from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T18:29:03.291 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:03 vm04 ceph-mon[57581]: from='client.? 
' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T18:29:03.291 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:03 vm04 ceph-mon[57581]: osdmap e53: 8 total, 8 up, 8 in 2026-03-09T18:29:03.291 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:03 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3208771543' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T18:29:03.291 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:03 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3802240141' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T18:29:03.291 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:03 vm04 ceph-mon[57581]: from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T18:29:03.291 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:03 vm04 ceph-mon[57581]: from='client.? 
' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T18:29:03.292 INFO:teuthology.orchestra.run.vm04.stdout:{"epoch":15,"flags":0,"active_gid":14150,"active_name":"y","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6800","nonce":3307881853},{"type":"v1","addr":"192.168.123.104:6801","nonce":3307881853}]},"active_addr":"192.168.123.104:6801/3307881853","active_change":"2026-03-09T18:26:58.489866+0000","active_mgr_features":4540701547738038271,"available":true,"standbys":[{"gid":14211,"name":"x","mgr_features":4540701547738038271,"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts 
to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across 
cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to 
days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent 
ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in 
Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), 
partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage 
/etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). 
Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail 
liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"def
ault_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True
","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health 
metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":
0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level
","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bo
ol","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","typ
e":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_a
lso":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. 
if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[
],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":
"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tag
s":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the 
cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","lon
g_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advan
ced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async 
work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error
","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}]}],"modules":["cephadm","dashboard","iostat","nfs","restful"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP 
port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent 
ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in 
Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), 
partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage 
/etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). 
Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail 
liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"def
ault_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True
","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health 
metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":
0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level
","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bo
ol","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","typ
e":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_a
lso":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. 
if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[
],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":
"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tag
s":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the 
cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","lon
g_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advan
ced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async 
work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error
","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{"dashboard":"https://192.168.123.104:8443/"},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"squid":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"force_disabled_modules":{},"last_failure_osd_epoch":3,"active_clients":[{"name":"devicehealth","addrvec":[{"type":"v2","addr":"192.168.123.104:0","nonce":4196008315}]},{"name":"libcephsqlite","addrvec":[{"type":"v2","addr":"192.
168.123.104:0","nonce":3219053921}]},{"name":"rbd_support","addrvec":[{"type":"v2","addr":"192.168.123.104:0","nonce":3411863413}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"192.168.123.104:0","nonce":782779760}]}]} 2026-03-09T18:29:03.293 INFO:tasks.cephadm.ceph_manager.ceph:mgr available! 2026-03-09T18:29:03.293 INFO:tasks.cephadm.ceph_manager.ceph:waiting for all up 2026-03-09T18:29:03.293 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph osd dump --format=json 2026-03-09T18:29:03.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:03 vm09 ceph-mon[54744]: from='client.24382 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm09=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:29:03.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:03 vm09 ceph-mon[54744]: Saving service grafana spec with placement vm09=a;count:1 2026-03-09T18:29:03.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:03 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/794292263' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T18:29:03.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:03 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T18:29:03.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:03 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T18:29:03.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:03 vm09 ceph-mon[54744]: from='client.? 192.168.123.109:0/3131858292' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T18:29:03.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:03 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T18:29:03.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:03 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T18:29:03.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:03 vm09 ceph-mon[54744]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:29:03.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:03 vm09 ceph-mon[54744]: from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T18:29:03.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:03 vm09 ceph-mon[54744]: from='client.? 
' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T18:29:03.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:03 vm09 ceph-mon[54744]: osdmap e53: 8 total, 8 up, 8 in 2026-03-09T18:29:03.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:03 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3208771543' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T18:29:03.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:03 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3802240141' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T18:29:03.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:03 vm09 ceph-mon[54744]: from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T18:29:03.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:03 vm09 ceph-mon[54744]: from='client.? 
' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T18:29:03.469 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config 2026-03-09T18:29:03.702 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:29:03.702 INFO:teuthology.orchestra.run.vm04.stdout:{"epoch":53,"fsid":"5769e1c8-1be5-11f1-a591-591820987f3e","created":"2026-03-09T18:26:36.478572+0000","modified":"2026-03-09T18:29:02.790526+0000","last_up_change":"2026-03-09T18:28:52.637078+0000","last_in_change":"2026-03-09T18:28:41.966037+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":6,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-09T18:28:00.728841+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"19","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"
quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":"datapool","create_time":"2026-03-09T18:28:55.222016+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":3,"pg_placement_num":3,"pg_placement_num_target":3,"pg_num_target":3,"pg_num_pending":3,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"49","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":49,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_by
tes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair distribution","score_acting":2.6500000953674316,"score_stable":2.6500000953674316,"optimal_score":0.87999999523162842,"raw_score_acting":2.3299999237060547,"raw_score_stable":2.3299999237060547,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":3,"pool_name":".rgw.root","create_time":"2026-03-09T18:28:55.600539+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"48","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_
target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair distribution","score_acting":1.5,"score_stable":1.5,"optimal_score":1,"raw_score_acting":1.5,"raw_score_stable":1.5,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":4,"pool_name":"default.rgw.log","create_time":"2026-03-09T18:28:56.837423+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"50","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":t
rue,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair distribution","score_acting":2.25,"score_stable":2.25,"optimal_score":1,"raw_score_acting":2.25,"raw_score_stable":2.25,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":5,"pool_name":"default.rgw.control","create_time":"2026-03-09T18:28:58.775197+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"52","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"
fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair distribution","score_acting":1.25,"score_stable":1.25,"optimal_score":1,"raw_score_acting":1.25,"raw_score_stable":1.25,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":6,"pool_name":"default.rgw.meta","create_time":"2026-03-09T18:29:00.939270+0000","flags":32769,"flags_names":"hashpspool,creating","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"53","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.75,"score_stable":1.75,"optimal_score":1,"raw_score_acting":1.75,"raw_score_stable":1.75,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"025c88ca-fa01-4cbd-9d6d-c54757ade897","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6802","nonce":1654539160},{"type":"v1","addr":"192.168.123.104:6803","nonce":1654539160}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6804","nonce":1654539160},{"type":"v1","addr":"192.168.123.104:6805","nonce":1654539160}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6808","nonce":1654539160},{"type":"v1","addr":"192.168.123.104:6809","nonce":1654539160}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6806","nonce":1654539160},{"type":"v1","addr":"192.168.123.104:6807","nonce":1654539160}]},"public_addr":"192.168.123.104:6803/1654539160","cluster_addr":"192.168.123.104:6805/1654539160","heartbeat_back_addr":"192.168.123.104:6809/1654539160","heartbeat_front_addr":"192.168.123.104:6807/1654539160","state":["exists","up"]},{"osd":1,"uuid":"f62082e3-9d11-4672-a72c-53d7908dbcd4","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":12,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6810","nonce":3519470547},{"type":"v1","addr":"192.168.123.104:6811","nonce":3519470547}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6812","nonce":3519470547},{"type":"v1","addr":"192.168.123.104:6813","nonce":3519470547}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6816","nonce":3519470547},{"type":"v1","addr":"192.168.123.104:6817","nonce":3519470547}]},"heartbeat_front_addrs":{"addrvec":[{"type":
"v2","addr":"192.168.123.104:6814","nonce":3519470547},{"type":"v1","addr":"192.168.123.104:6815","nonce":3519470547}]},"public_addr":"192.168.123.104:6811/3519470547","cluster_addr":"192.168.123.104:6813/3519470547","heartbeat_back_addr":"192.168.123.104:6817/3519470547","heartbeat_front_addr":"192.168.123.104:6815/3519470547","state":["exists","up"]},{"osd":2,"uuid":"9c64b919-8d93-49bb-84a4-7291defe1cb0","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":16,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6818","nonce":1080091581},{"type":"v1","addr":"192.168.123.104:6819","nonce":1080091581}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6820","nonce":1080091581},{"type":"v1","addr":"192.168.123.104:6821","nonce":1080091581}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6824","nonce":1080091581},{"type":"v1","addr":"192.168.123.104:6825","nonce":1080091581}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6822","nonce":1080091581},{"type":"v1","addr":"192.168.123.104:6823","nonce":1080091581}]},"public_addr":"192.168.123.104:6819/1080091581","cluster_addr":"192.168.123.104:6821/1080091581","heartbeat_back_addr":"192.168.123.104:6825/1080091581","heartbeat_front_addr":"192.168.123.104:6823/1080091581","state":["exists","up"]},{"osd":3,"uuid":"c3feb6a9-175f-4b52-934d-734e9f86504a","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":23,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6826","nonce":3227748853},{"type":"v1","addr":"192.168.123.104:6827","nonce":3227748853}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6828","nonce":3227748853},{"type":"v1","addr":"192.168.123.104:6829","nonce":3227748853}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6832","
nonce":3227748853},{"type":"v1","addr":"192.168.123.104:6833","nonce":3227748853}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6830","nonce":3227748853},{"type":"v1","addr":"192.168.123.104:6831","nonce":3227748853}]},"public_addr":"192.168.123.104:6827/3227748853","cluster_addr":"192.168.123.104:6829/3227748853","heartbeat_back_addr":"192.168.123.104:6833/3227748853","heartbeat_front_addr":"192.168.123.104:6831/3227748853","state":["exists","up"]},{"osd":4,"uuid":"d1342c95-9bc8-457d-bd07-044a344312a1","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":27,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6800","nonce":2821151016},{"type":"v1","addr":"192.168.123.109:6801","nonce":2821151016}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6802","nonce":2821151016},{"type":"v1","addr":"192.168.123.109:6803","nonce":2821151016}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6806","nonce":2821151016},{"type":"v1","addr":"192.168.123.109:6807","nonce":2821151016}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6804","nonce":2821151016},{"type":"v1","addr":"192.168.123.109:6805","nonce":2821151016}]},"public_addr":"192.168.123.109:6801/2821151016","cluster_addr":"192.168.123.109:6803/2821151016","heartbeat_back_addr":"192.168.123.109:6807/2821151016","heartbeat_front_addr":"192.168.123.109:6805/2821151016","state":["exists","up"]},{"osd":5,"uuid":"a0757438-7809-4314-b9dd-37b37818922c","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":33,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6808","nonce":3792197053},{"type":"v1","addr":"192.168.123.109:6809","nonce":3792197053}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6810","nonce":3792197053},{"type":"v1","ad
dr":"192.168.123.109:6811","nonce":3792197053}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6814","nonce":3792197053},{"type":"v1","addr":"192.168.123.109:6815","nonce":3792197053}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6812","nonce":3792197053},{"type":"v1","addr":"192.168.123.109:6813","nonce":3792197053}]},"public_addr":"192.168.123.109:6809/3792197053","cluster_addr":"192.168.123.109:6811/3792197053","heartbeat_back_addr":"192.168.123.109:6815/3792197053","heartbeat_front_addr":"192.168.123.109:6813/3792197053","state":["exists","up"]},{"osd":6,"uuid":"45c78567-75e6-4026-9257-685a9df3da40","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":38,"up_thru":50,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6816","nonce":434417640},{"type":"v1","addr":"192.168.123.109:6817","nonce":434417640}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6818","nonce":434417640},{"type":"v1","addr":"192.168.123.109:6819","nonce":434417640}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6822","nonce":434417640},{"type":"v1","addr":"192.168.123.109:6823","nonce":434417640}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6820","nonce":434417640},{"type":"v1","addr":"192.168.123.109:6821","nonce":434417640}]},"public_addr":"192.168.123.109:6817/434417640","cluster_addr":"192.168.123.109:6819/434417640","heartbeat_back_addr":"192.168.123.109:6823/434417640","heartbeat_front_addr":"192.168.123.109:6821/434417640","state":["exists","up"]},{"osd":7,"uuid":"df6ed9a4-e641-43b0-965e-fef9ac178911","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":43,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":3755915520},{"type":"v1","addr":"192.168.123.109:6825","nonce":37559
15520}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6826","nonce":3755915520},{"type":"v1","addr":"192.168.123.109:6827","nonce":3755915520}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6830","nonce":3755915520},{"type":"v1","addr":"192.168.123.109:6831","nonce":3755915520}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6828","nonce":3755915520},{"type":"v1","addr":"192.168.123.109:6829","nonce":3755915520}]},"public_addr":"192.168.123.109:6825/3755915520","cluster_addr":"192.168.123.109:6827/3755915520","heartbeat_back_addr":"192.168.123.109:6831/3755915520","heartbeat_front_addr":"192.168.123.109:6829/3755915520","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:27:36.911804+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:27:48.734946+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:27:58.725426+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:28:09.412692+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:28:19.189100+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:28:30.522704+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features
":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:28:40.429409+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:28:50.812187+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.104:0/2245319889":"2026-03-10T18:26:58.489749+0000","192.168.123.104:6801/3893019713":"2026-03-10T18:26:58.489749+0000","192.168.123.104:0/2759174145":"2026-03-10T18:26:58.489749+0000","192.168.123.104:0/2615826806":"2026-03-10T18:26:48.693718+0000","192.168.123.104:0/2830712721":"2026-03-10T18:26:58.489749+0000","192.168.123.104:0/2715425455":"2026-03-10T18:26:48.693718+0000","192.168.123.104:6801/2403102279":"2026-03-10T18:26:48.693718+0000","192.168.123.104:6800/3893019713":"2026-03-10T18:26:58.489749+0000","192.168.123.104:6800/2403102279":"2026-03-10T18:26:48.693718+0000","192.168.123.104:0/1056116274":"2026-03-10T18:26:48.693718+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[{"pool":2,"snaps":[{"begin":2,"length":1}]}],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T18:29:03.749 INFO:tasks.cephadm.ceph_manager.ceph:all up! 
2026-03-09T18:29:03.750 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph osd dump --format=json 2026-03-09T18:29:03.925 INFO:journalctl@ceph.rgw.foo.a.vm04.stdout:Mar 09 18:29:03 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-rgw-foo-a[80463]: 2026-03-09T18:29:03.920+0000 7f2e09899980 -1 LDAP not started since no server URIs were provided in the configuration. 2026-03-09T18:29:04.026 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config 2026-03-09T18:29:04.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:04 vm04 ceph-mon[57581]: pgmap v104: 132 pgs: 86 active+clean, 24 creating+peering, 22 unknown; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 3 op/s 2026-03-09T18:29:04.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:04 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/2626743500' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T18:29:04.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:04 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3339642664' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T18:29:04.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:04 vm04 ceph-mon[57581]: from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T18:29:04.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:04 vm04 ceph-mon[57581]: from='client.? 
' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T18:29:04.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:04 vm04 ceph-mon[57581]: osdmap e54: 8 total, 8 up, 8 in 2026-03-09T18:29:04.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:04 vm04 ceph-mon[51427]: pgmap v104: 132 pgs: 86 active+clean, 24 creating+peering, 22 unknown; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 3 op/s 2026-03-09T18:29:04.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:04 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/2626743500' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T18:29:04.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:04 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3339642664' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T18:29:04.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:04 vm04 ceph-mon[51427]: from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T18:29:04.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:04 vm04 ceph-mon[51427]: from='client.? 
' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T18:29:04.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:04 vm04 ceph-mon[51427]: osdmap e54: 8 total, 8 up, 8 in 2026-03-09T18:29:04.318 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:29:04.318 INFO:teuthology.orchestra.run.vm04.stdout:{"epoch":54,"fsid":"5769e1c8-1be5-11f1-a591-591820987f3e","created":"2026-03-09T18:26:36.478572+0000","modified":"2026-03-09T18:29:03.793273+0000","last_up_change":"2026-03-09T18:28:52.637078+0000","last_in_change":"2026-03-09T18:28:41.966037+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":6,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-09T18:28:00.728841+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"19","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_ob
jects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":"datapool","create_time":"2026-03-09T18:28:55.222016+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":3,"pg_placement_num":3,"pg_placement_num_target":3,"pg_num_target":3,"pg_num_pending":3,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"49","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":49,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"targ
et_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair distribution","score_acting":2.6500000953674316,"score_stable":2.6500000953674316,"optimal_score":0.87999999523162842,"raw_score_acting":2.3299999237060547,"raw_score_stable":2.3299999237060547,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":3,"pool_name":".rgw.root","create_time":"2026-03-09T18:28:55.600539+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"48","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_
ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair distribution","score_acting":1.5,"score_stable":1.5,"optimal_score":1,"raw_score_acting":1.5,"raw_score_stable":1.5,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":4,"pool_name":"default.rgw.log","create_time":"2026-03-09T18:28:56.837423+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"50","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_rea
d_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair distribution","score_acting":2.25,"score_stable":2.25,"optimal_score":1,"raw_score_acting":2.25,"raw_score_stable":2.25,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":5,"pool_name":"default.rgw.control","create_time":"2026-03-09T18:28:58.775197+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"52","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":f
alse,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair distribution","score_acting":1.25,"score_stable":1.25,"optimal_score":1,"raw_score_acting":1.25,"raw_score_stable":1.25,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":6,"pool_name":"default.rgw.meta","create_time":"2026-03-09T18:29:00.939270+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"54","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_autoscale_bias":4},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.75,"score_stable":1.75,"optimal_score":1,"raw_score_acting":1.75,"raw_score_stable":1.75,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"025c88ca-fa01-4cbd-9d6d-c54757ade897","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6802","nonce":1654539160},{"type":"v1","addr":"192.168.123.104:6803","nonce":1654539160}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6804","nonce":1654539160},{"type":"v1","addr":"192.168.123.104:6805","nonce":1654539160}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6808","nonce":1654539160},{"type":"v1","addr":"192.168.123.104:6809","nonce":1654539160}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6806","nonce":1654539160},{"type":"v1","addr":"192.168.123.104:6807","nonce":1654539160}]},"public_addr":"192.168.123.104:6803/1654539160","cluster_addr":"192.168.123.104:6805/1654539160","heartbeat_back_addr":"192.168.123.104:6809/1654539160","heartbeat_front_addr":"192.168.123.104:6807/1654539160","state":["exists","up"]},{"osd":1,"uuid":"f62082e3-9d11-4672-a72c-53d7908dbcd4","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":12,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6810","nonce":3519470547},{"type":"v1","addr":"192.168.123.104:6811","nonce":3519470547}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6812","nonce":3519470547},{"type":"v1","addr":"192.168.123.104:6813","nonce":3519470547}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6816","nonce":3519470547},{"type":"v1","addr":"192.168.123.104:6817","nonce":3519470547}]},"heartbeat_front_addrs":{"addrvec":[{"type":
"v2","addr":"192.168.123.104:6814","nonce":3519470547},{"type":"v1","addr":"192.168.123.104:6815","nonce":3519470547}]},"public_addr":"192.168.123.104:6811/3519470547","cluster_addr":"192.168.123.104:6813/3519470547","heartbeat_back_addr":"192.168.123.104:6817/3519470547","heartbeat_front_addr":"192.168.123.104:6815/3519470547","state":["exists","up"]},{"osd":2,"uuid":"9c64b919-8d93-49bb-84a4-7291defe1cb0","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":16,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6818","nonce":1080091581},{"type":"v1","addr":"192.168.123.104:6819","nonce":1080091581}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6820","nonce":1080091581},{"type":"v1","addr":"192.168.123.104:6821","nonce":1080091581}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6824","nonce":1080091581},{"type":"v1","addr":"192.168.123.104:6825","nonce":1080091581}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6822","nonce":1080091581},{"type":"v1","addr":"192.168.123.104:6823","nonce":1080091581}]},"public_addr":"192.168.123.104:6819/1080091581","cluster_addr":"192.168.123.104:6821/1080091581","heartbeat_back_addr":"192.168.123.104:6825/1080091581","heartbeat_front_addr":"192.168.123.104:6823/1080091581","state":["exists","up"]},{"osd":3,"uuid":"c3feb6a9-175f-4b52-934d-734e9f86504a","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":23,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6826","nonce":3227748853},{"type":"v1","addr":"192.168.123.104:6827","nonce":3227748853}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6828","nonce":3227748853},{"type":"v1","addr":"192.168.123.104:6829","nonce":3227748853}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6832","
nonce":3227748853},{"type":"v1","addr":"192.168.123.104:6833","nonce":3227748853}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6830","nonce":3227748853},{"type":"v1","addr":"192.168.123.104:6831","nonce":3227748853}]},"public_addr":"192.168.123.104:6827/3227748853","cluster_addr":"192.168.123.104:6829/3227748853","heartbeat_back_addr":"192.168.123.104:6833/3227748853","heartbeat_front_addr":"192.168.123.104:6831/3227748853","state":["exists","up"]},{"osd":4,"uuid":"d1342c95-9bc8-457d-bd07-044a344312a1","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":27,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6800","nonce":2821151016},{"type":"v1","addr":"192.168.123.109:6801","nonce":2821151016}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6802","nonce":2821151016},{"type":"v1","addr":"192.168.123.109:6803","nonce":2821151016}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6806","nonce":2821151016},{"type":"v1","addr":"192.168.123.109:6807","nonce":2821151016}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6804","nonce":2821151016},{"type":"v1","addr":"192.168.123.109:6805","nonce":2821151016}]},"public_addr":"192.168.123.109:6801/2821151016","cluster_addr":"192.168.123.109:6803/2821151016","heartbeat_back_addr":"192.168.123.109:6807/2821151016","heartbeat_front_addr":"192.168.123.109:6805/2821151016","state":["exists","up"]},{"osd":5,"uuid":"a0757438-7809-4314-b9dd-37b37818922c","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":33,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6808","nonce":3792197053},{"type":"v1","addr":"192.168.123.109:6809","nonce":3792197053}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6810","nonce":3792197053},{"type":"v1","ad
dr":"192.168.123.109:6811","nonce":3792197053}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6814","nonce":3792197053},{"type":"v1","addr":"192.168.123.109:6815","nonce":3792197053}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6812","nonce":3792197053},{"type":"v1","addr":"192.168.123.109:6813","nonce":3792197053}]},"public_addr":"192.168.123.109:6809/3792197053","cluster_addr":"192.168.123.109:6811/3792197053","heartbeat_back_addr":"192.168.123.109:6815/3792197053","heartbeat_front_addr":"192.168.123.109:6813/3792197053","state":["exists","up"]},{"osd":6,"uuid":"45c78567-75e6-4026-9257-685a9df3da40","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":38,"up_thru":50,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6816","nonce":434417640},{"type":"v1","addr":"192.168.123.109:6817","nonce":434417640}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6818","nonce":434417640},{"type":"v1","addr":"192.168.123.109:6819","nonce":434417640}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6822","nonce":434417640},{"type":"v1","addr":"192.168.123.109:6823","nonce":434417640}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6820","nonce":434417640},{"type":"v1","addr":"192.168.123.109:6821","nonce":434417640}]},"public_addr":"192.168.123.109:6817/434417640","cluster_addr":"192.168.123.109:6819/434417640","heartbeat_back_addr":"192.168.123.109:6823/434417640","heartbeat_front_addr":"192.168.123.109:6821/434417640","state":["exists","up"]},{"osd":7,"uuid":"df6ed9a4-e641-43b0-965e-fef9ac178911","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":43,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":3755915520},{"type":"v1","addr":"192.168.123.109:6825","nonce":37559
15520}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6826","nonce":3755915520},{"type":"v1","addr":"192.168.123.109:6827","nonce":3755915520}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6830","nonce":3755915520},{"type":"v1","addr":"192.168.123.109:6831","nonce":3755915520}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6828","nonce":3755915520},{"type":"v1","addr":"192.168.123.109:6829","nonce":3755915520}]},"public_addr":"192.168.123.109:6825/3755915520","cluster_addr":"192.168.123.109:6827/3755915520","heartbeat_back_addr":"192.168.123.109:6831/3755915520","heartbeat_front_addr":"192.168.123.109:6829/3755915520","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:27:36.911804+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:27:48.734946+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:27:58.725426+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:28:09.412692+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:28:19.189100+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:28:30.522704+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features
":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:28:40.429409+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:28:50.812187+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.104:0/2245319889":"2026-03-10T18:26:58.489749+0000","192.168.123.104:6801/3893019713":"2026-03-10T18:26:58.489749+0000","192.168.123.104:0/2759174145":"2026-03-10T18:26:58.489749+0000","192.168.123.104:0/2615826806":"2026-03-10T18:26:48.693718+0000","192.168.123.104:0/2830712721":"2026-03-10T18:26:58.489749+0000","192.168.123.104:0/2715425455":"2026-03-10T18:26:48.693718+0000","192.168.123.104:6801/2403102279":"2026-03-10T18:26:48.693718+0000","192.168.123.104:6800/3893019713":"2026-03-10T18:26:58.489749+0000","192.168.123.104:6800/2403102279":"2026-03-10T18:26:48.693718+0000","192.168.123.104:0/1056116274":"2026-03-10T18:26:48.693718+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[{"pool":2,"snaps":[{"begin":2,"length":1}]}],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T18:29:04.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:04 vm09 ceph-mon[54744]: pgmap v104: 132 pgs: 86 active+clean, 24 creating+peering, 22 unknown; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 3 op/s 2026-03-09T18:29:04.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:04 vm09 ceph-mon[54744]: from='client.? 
192.168.123.104:0/2626743500' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch
2026-03-09T18:29:04.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:04 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3339642664' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T18:29:04.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:04 vm09 ceph-mon[54744]: from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
2026-03-09T18:29:04.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:04 vm09 ceph-mon[54744]: from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
2026-03-09T18:29:04.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:04 vm09 ceph-mon[54744]: osdmap e54: 8 total, 8 up, 8 in
2026-03-09T18:29:04.378 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph tell osd.0 flush_pg_stats
2026-03-09T18:29:04.379 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph tell osd.1 flush_pg_stats
2026-03-09T18:29:04.379 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph tell osd.2 flush_pg_stats
2026-03-09T18:29:04.379 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph tell osd.3 flush_pg_stats
2026-03-09T18:29:04.379 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph tell osd.4 flush_pg_stats
2026-03-09T18:29:04.379 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph tell osd.5 flush_pg_stats
2026-03-09T18:29:04.379 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph tell osd.6 flush_pg_stats
2026-03-09T18:29:04.379 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph tell osd.7 flush_pg_stats
2026-03-09T18:29:05.155 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config
2026-03-09T18:29:05.291 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:29:05 vm09 systemd[1]: Starting Ceph iscsi.iscsi.a for 5769e1c8-1be5-11f1-a591-591820987f3e...
2026-03-09T18:29:05.448 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config
2026-03-09T18:29:05.543 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:29:05 vm09 podman[78143]: 2026-03-09 18:29:05.309147424 +0000 UTC m=+0.018119805 container create 4f52d2a052afaf53b436f3d6910aa8a6333e116ff78cb31dda522d2bfcdcdda3 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a, CEPH_REF=squid, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
2026-03-09T18:29:05.543 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:29:05 vm09 podman[78143]: 2026-03-09 18:29:05.351756772 +0000 UTC m=+0.060729162 container init 4f52d2a052afaf53b436f3d6910aa8a6333e116ff78cb31dda522d2bfcdcdda3 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a, org.opencontainers.image.authors=Ceph Release Team , ceph=True, CEPH_REF=squid, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
2026-03-09T18:29:05.543 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:29:05 vm09 podman[78143]: 2026-03-09 18:29:05.355774489 +0000 UTC m=+0.064746879 container start 4f52d2a052afaf53b436f3d6910aa8a6333e116ff78cb31dda522d2bfcdcdda3 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0)
2026-03-09T18:29:05.543 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:29:05 vm09 bash[78143]: 4f52d2a052afaf53b436f3d6910aa8a6333e116ff78cb31dda522d2bfcdcdda3
2026-03-09T18:29:05.543 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:29:05 vm09 podman[78143]: 2026-03-09 18:29:05.30229887 +0000 UTC m=+0.011271270 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc
2026-03-09T18:29:05.543 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:29:05 vm09 systemd[1]: Started Ceph iscsi.iscsi.a for 5769e1c8-1be5-11f1-a591-591820987f3e.
2026-03-09T18:29:05.544 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:05 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:29:05.544 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:05 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/1589494652' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T18:29:05.544 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:05 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:29:05.544 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:05 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:29:05.544 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:05 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-09T18:29:05.544 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:05 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished
2026-03-09T18:29:05.544 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:05 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:29:05.544 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:05 vm09 ceph-mon[54744]: Deploying daemon iscsi.iscsi.a on vm09
2026-03-09T18:29:05.544 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:05 vm09 ceph-mon[54744]: pgmap v107: 132 pgs: 97 active+clean, 22 creating+peering, 13 unknown; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 511 B/s wr, 2 op/s
2026-03-09T18:29:05.600 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:05 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:29:05.600 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:05 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/1589494652' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T18:29:05.600 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:05 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:29:05.600 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:05 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:29:05.600 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:05 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-09T18:29:05.600 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:05 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished
2026-03-09T18:29:05.600 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:05 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:29:05.600 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:05 vm04 ceph-mon[57581]: Deploying daemon iscsi.iscsi.a on vm09
2026-03-09T18:29:05.600 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:05 vm04 ceph-mon[57581]: pgmap v107: 132 pgs: 97 active+clean, 22 creating+peering, 13 unknown; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 511 B/s wr, 2 op/s
2026-03-09T18:29:05.601 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config
2026-03-09T18:29:05.608 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:05 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:29:05.608 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:05 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/1589494652' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T18:29:05.608 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:05 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:29:05.608 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:05 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:29:05.608 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:05 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-09T18:29:05.608 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:05 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished
2026-03-09T18:29:05.608 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:05 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:29:05.608 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:05 vm04 ceph-mon[51427]: Deploying daemon iscsi.iscsi.a on vm09
2026-03-09T18:29:05.608 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:05 vm04 ceph-mon[51427]: pgmap v107: 132 pgs: 97 active+clean, 22 creating+peering, 13 unknown; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 511 B/s wr, 2 op/s
2026-03-09T18:29:05.614 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config
2026-03-09T18:29:05.812 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:29:05 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug Started the configuration object watcher
2026-03-09T18:29:05.812 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:29:05 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug Checking for config object changes every 1s
2026-03-09T18:29:05.812 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:29:05 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug Processing osd blocklist entries for this node
2026-03-09T18:29:05.812 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:29:05 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug Reading the configuration object to update local LIO configuration
2026-03-09T18:29:05.812 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:29:05 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug Configuration does not have an entry for this host(vm09.local) - nothing to define to LIO
2026-03-09T18:29:05.835 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config
2026-03-09T18:29:05.840 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config
2026-03-09T18:29:05.861 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config
2026-03-09T18:29:05.861 INFO:teuthology.orchestra.run.vm04.stdout:98784247820
2026-03-09T18:29:05.862 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph osd last-stat-seq osd.3
2026-03-09T18:29:05.883 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config
2026-03-09T18:29:06.109 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:29:05 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: * Serving Flask app 'rbd-target-api' (lazy loading)
2026-03-09T18:29:06.109 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:29:05 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: * Environment: production
2026-03-09T18:29:06.109 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:29:05 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: WARNING: This is a development server. Do not use it in a production deployment.
2026-03-09T18:29:06.109 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:29:05 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: Use a production WSGI server instead.
2026-03-09T18:29:06.109 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:29:05 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: * Debug mode: off
2026-03-09T18:29:06.109 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:29:05 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug * Running on all addresses.
2026-03-09T18:29:06.109 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:29:05 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: WARNING: This is a development server. Do not use it in a production deployment.
2026-03-09T18:29:06.109 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:29:05 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: * Running on all addresses.
2026-03-09T18:29:06.109 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:29:05 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: WARNING: This is a development server. Do not use it in a production deployment.
2026-03-09T18:29:06.109 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:29:05 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug * Running on http://[::1]:5000/ (Press CTRL+C to quit)
2026-03-09T18:29:06.109 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:29:05 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: * Running on http://[::1]:5000/ (Press CTRL+C to quit)
2026-03-09T18:29:06.189 INFO:teuthology.orchestra.run.vm04.stdout:184683593732
2026-03-09T18:29:06.189 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph osd last-stat-seq osd.7
2026-03-09T18:29:06.369 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:06 vm04 ceph-mon[57581]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
2026-03-09T18:29:06.370 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:06 vm04 ceph-mon[57581]: Cluster is now healthy
2026-03-09T18:29:06.370 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:06 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:29:06.370 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:06 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:29:06.370 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:06 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:29:06.370 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:06 vm04 ceph-mon[57581]: Checking pool "datapool" exists for service iscsi.datapool
2026-03-09T18:29:06.370 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:06 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:29:06.370 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:06 vm04 ceph-mon[57581]: Deploying daemon prometheus.a on vm09
2026-03-09T18:29:06.370 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:06 vm04 ceph-mon[57581]: from='client.? 192.168.123.109:0/1707150373' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-09T18:29:06.371 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:06 vm04 ceph-mon[51427]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
2026-03-09T18:29:06.371 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:06 vm04 ceph-mon[51427]: Cluster is now healthy
2026-03-09T18:29:06.371 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:06 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:29:06.371 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:06 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:29:06.371 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:06 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:29:06.371 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:06 vm04 ceph-mon[51427]: Checking pool "datapool" exists for service iscsi.datapool
2026-03-09T18:29:06.371 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:06 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:29:06.371 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:06 vm04 ceph-mon[51427]: Deploying daemon prometheus.a on vm09
2026-03-09T18:29:06.371 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:06 vm04 ceph-mon[51427]: from='client.? 192.168.123.109:0/1707150373' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-09T18:29:06.545 INFO:teuthology.orchestra.run.vm04.stdout:115964117003
2026-03-09T18:29:06.545 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph osd last-stat-seq osd.4
2026-03-09T18:29:06.592 INFO:teuthology.orchestra.run.vm04.stdout:163208757254
2026-03-09T18:29:06.592 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph osd last-stat-seq osd.6
2026-03-09T18:29:06.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:06 vm09 ceph-mon[54744]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
2026-03-09T18:29:06.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:06 vm09 ceph-mon[54744]: Cluster is now healthy
2026-03-09T18:29:06.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:06 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:29:06.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:06 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:29:06.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:06 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:29:06.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:06 vm09 ceph-mon[54744]: Checking pool "datapool" exists for service iscsi.datapool
2026-03-09T18:29:06.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:06 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:29:06.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:06 vm09 ceph-mon[54744]: Deploying daemon prometheus.a on vm09
2026-03-09T18:29:06.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:06 vm09 ceph-mon[54744]: from='client.? 192.168.123.109:0/1707150373' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-09T18:29:06.669 INFO:teuthology.orchestra.run.vm04.stdout:34359738387
2026-03-09T18:29:06.669 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph osd last-stat-seq osd.0
2026-03-09T18:29:06.725 INFO:teuthology.orchestra.run.vm04.stdout:141733920777
2026-03-09T18:29:06.726 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph osd last-stat-seq osd.5
2026-03-09T18:29:06.758 INFO:teuthology.orchestra.run.vm04.stdout:68719476750
2026-03-09T18:29:06.758 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph osd last-stat-seq osd.2
2026-03-09T18:29:06.761 INFO:teuthology.orchestra.run.vm04.stdout:51539607569
2026-03-09T18:29:06.761 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph osd last-stat-seq osd.1
2026-03-09T18:29:06.971 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config
2026-03-09T18:29:07.064 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config
2026-03-09T18:29:07.617 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config
2026-03-09T18:29:07.670 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config
2026-03-09T18:29:07.671 INFO:teuthology.orchestra.run.vm04.stdout:98784247820
2026-03-09T18:29:07.680 INFO:teuthology.orchestra.run.vm04.stdout:184683593732
2026-03-09T18:29:07.709 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config
2026-03-09T18:29:07.905 INFO:tasks.cephadm.ceph_manager.ceph:need seq 98784247820 got 98784247820 for osd.3
2026-03-09T18:29:07.905 DEBUG:teuthology.parallel:result is None
2026-03-09T18:29:07.914 INFO:tasks.cephadm.ceph_manager.ceph:need seq 184683593732 got 184683593732 for osd.7
2026-03-09T18:29:07.914 DEBUG:teuthology.parallel:result is None
2026-03-09T18:29:07.928 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:07 vm04 ceph-mon[51427]: pgmap v108: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 79 KiB/s rd, 6.4 KiB/s wr, 192 op/s
2026-03-09T18:29:07.928 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:07 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:29:07.928 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:07 vm04 ceph-mon[51427]: mgrmap e16: y(active, since 2m), standbys: x
2026-03-09T18:29:07.928 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:07 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3246631070' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch
2026-03-09T18:29:07.928 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:07 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/589015406' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch
2026-03-09T18:29:07.928 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:07 vm04 ceph-mon[51427]: osdmap e55: 8 total, 8 up, 8 in
2026-03-09T18:29:07.928 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:07 vm04 ceph-mon[57581]: pgmap v108: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 79 KiB/s rd, 6.4 KiB/s wr, 192 op/s
2026-03-09T18:29:07.928 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:07 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:29:07.928 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:07 vm04 ceph-mon[57581]: mgrmap e16: y(active, since 2m), standbys: x
2026-03-09T18:29:07.928 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:07 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3246631070' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch
2026-03-09T18:29:07.928 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:07 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/589015406' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch
2026-03-09T18:29:07.928 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:07 vm04 ceph-mon[57581]: osdmap e55: 8 total, 8 up, 8 in
2026-03-09T18:29:08.086 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config
2026-03-09T18:29:08.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:07 vm09 ceph-mon[54744]: pgmap v108: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 79 KiB/s rd, 6.4 KiB/s wr, 192 op/s
2026-03-09T18:29:08.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:07 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y'
2026-03-09T18:29:08.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:07 vm09 ceph-mon[54744]: mgrmap e16: y(active, since 2m), standbys: x
2026-03-09T18:29:08.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:07 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3246631070' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch
2026-03-09T18:29:08.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:07 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/589015406' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch
2026-03-09T18:29:08.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:07 vm09 ceph-mon[54744]: osdmap e55: 8 total, 8 up, 8 in
2026-03-09T18:29:08.208 INFO:teuthology.orchestra.run.vm04.stdout:115964117003
2026-03-09T18:29:08.250 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config
2026-03-09T18:29:08.284 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config
2026-03-09T18:29:08.416 INFO:teuthology.orchestra.run.vm04.stdout:163208757254
2026-03-09T18:29:08.609 INFO:tasks.cephadm.ceph_manager.ceph:need seq 115964117003 got 115964117003 for osd.4
2026-03-09T18:29:08.609 DEBUG:teuthology.parallel:result is None
2026-03-09T18:29:08.629 INFO:tasks.cephadm.ceph_manager.ceph:need seq 163208757254 got 163208757254 for osd.6
2026-03-09T18:29:08.630 DEBUG:teuthology.parallel:result is None
2026-03-09T18:29:08.666 INFO:teuthology.orchestra.run.vm04.stdout:68719476751
2026-03-09T18:29:08.790 INFO:tasks.cephadm.ceph_manager.ceph:need seq 68719476750 got 68719476751 for osd.2
2026-03-09T18:29:08.790 DEBUG:teuthology.parallel:result is None
2026-03-09T18:29:08.869 INFO:teuthology.orchestra.run.vm04.stdout:34359738387
2026-03-09T18:29:08.869 INFO:teuthology.orchestra.run.vm04.stdout:141733920778
2026-03-09T18:29:08.943 INFO:tasks.cephadm.ceph_manager.ceph:need seq 34359738387 got 34359738387 for osd.0
2026-03-09T18:29:08.943 DEBUG:teuthology.parallel:result is None
2026-03-09T18:29:08.944 INFO:teuthology.orchestra.run.vm04.stdout:51539607569
2026-03-09T18:29:09.004 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:08 vm04 ceph-mon[51427]: from='client.?
192.168.123.104:0/3494604117' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T18:29:09.004 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:08 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/1473415734' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-09T18:29:09.004 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:08 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/4135855093' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T18:29:09.004 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:08 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3494604117' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T18:29:09.004 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:08 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/1473415734' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-09T18:29:09.004 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:08 vm04 ceph-mon[57581]: from='client.? 
192.168.123.104:0/4135855093' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T18:29:09.005 INFO:tasks.cephadm.ceph_manager.ceph:need seq 51539607569 got 51539607569 for osd.1 2026-03-09T18:29:09.005 DEBUG:teuthology.parallel:result is None 2026-03-09T18:29:09.015 INFO:tasks.cephadm.ceph_manager.ceph:need seq 141733920777 got 141733920778 for osd.5 2026-03-09T18:29:09.015 DEBUG:teuthology.parallel:result is None 2026-03-09T18:29:09.016 INFO:tasks.cephadm.ceph_manager.ceph:waiting for clean 2026-03-09T18:29:09.016 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph pg dump --format=json 2026-03-09T18:29:09.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:08 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3494604117' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T18:29:09.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:08 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/1473415734' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-09T18:29:09.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:08 vm09 ceph-mon[54744]: from='client.? 
192.168.123.104:0/4135855093' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T18:29:09.241 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config 2026-03-09T18:29:09.466 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:29:09.469 INFO:teuthology.orchestra.run.vm04.stderr:dumped all 2026-03-09T18:29:09.542 INFO:teuthology.orchestra.run.vm04.stdout:{"pg_ready":true,"pg_map":{"version":110,"stamp":"2026-03-09T18:29:08.709144+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":465419,"num_objects":199,"num_object_clones":0,"num_object_copies":597,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":199,"num_whiteouts":0,"num_read":776,"num_read_kb":519,"num_write":493,"num_write_kb":629,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":505,"ondisk_log_size":505,"up":396,"acting":396,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":392,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":8,"kb":167739392,"kb_used":220836,"kb_used_data":6152,"kb_used_omap":12,"kb_used
_meta":214515,"kb_avail":167518556,"statfs":{"total":171765137408,"available":171539001344,"internally_reserved":0,"allocated":6299648,"data_stored":3354165,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":12711,"internal_metadata":219663961},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[0,0,0,0,0,2],"upper_bound":64},"perf_stat":{"commit_latency_ms":7,"apply_latency_ms":7,"commit_latency_ns":7000000,"apply_latency_ns":7000000},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":4364,"num_objects":182,"num_object_clones":0,"num_object_copies":546,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":182,"num_whiteouts":0,"num_read":709,"num_read_kb":465,"num_write":424,"num_write_kb":37,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"6.001338"},"pg_stats":[{"pgid":"3.1f","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.819438+0000","last_change":"2026-03-09T18:
28:56.743478+0000","last_active":"2026-03-09T18:29:03.819438+0000","last_peered":"2026-03-09T18:29:03.819438+0000","last_clean":"2026-03-09T18:29:03.819438+0000","last_became_active":"2026-03-09T18:28:56.743256+0000","last_became_peered":"2026-03-09T18:28:56.743256+0000","last_unstale":"2026-03-09T18:29:03.819438+0000","last_undegraded":"2026-03-09T18:29:03.819438+0000","last_fullsized":"2026-03-09T18:29:03.819438+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:56:49.823169+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,2],"acting":[0,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.18","version":"54'9","reported_seq":37,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.314304+0000","last_change":"2026-03-09T18:28:58.778397+0000","last_active":"2026-03-09T18:29:04.314304+0000","last_peered":"2026-03-09T18:29:04.314304+0000","last_clean":"2026-03-09T18:29:04.314304+0000","last_became_active":"2026-03-09T18:28:58.778104+0000","last_became_peered":"2026-03-09T18:28:58.778104+0000","last_unstale":"2026-03-09T18:29:04.314304+0000","last_undegraded":"2026-03-09T18:29:04.314304+0000","last_fullsized":"2026-03-09T18:29:04.314304+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.71
9732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:50:15.933694+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.19","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.825961+0000","last_change":"2026-03-09T18:29:00.809312+0000","last_active":"2026-03-09T18:29:03.825961+0000","last_peered":"2026-03-09T18:29:03.825961+0000","last_clean":"2026-03-09T18:29:03.825961+0000","last_became_active":"2026-03-09T18:29:00.809035+0000","last_became_peered":"2026-03-09T18:29:00.809035+000
0","last_unstale":"2026-03-09T18:29:03.825961+0000","last_undegraded":"2026-03-09T18:29:03.825961+0000","last_fullsized":"2026-03-09T18:29:03.825961+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T03:07:55.782886+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,7],"acting":[1,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.
1a","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.829422+0000","last_change":"2026-03-09T18:29:02.831406+0000","last_active":"2026-03-09T18:29:03.829422+0000","last_peered":"2026-03-09T18:29:03.829422+0000","last_clean":"2026-03-09T18:29:03.829422+0000","last_became_active":"2026-03-09T18:29:02.830785+0000","last_became_peered":"2026-03-09T18:29:02.830785+0000","last_unstale":"2026-03-09T18:29:03.829422+0000","last_undegraded":"2026-03-09T18:29:03.829422+0000","last_fullsized":"2026-03-09T18:29:03.829422+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:39:02.176302+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,1],"acting":[4,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.1b","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.816194+0000","last_change":"2026-03-09T18:29:02.826043+0000","last_active":"2026-03-09T18:29:03.816194+0000","last_peered":"2026-03-09T18:29:03.816194+0000","last_clean":"2026-03-09T18:29:03.816194+0000","last_became_active":"2026-03-09T18:29:02.825906+0000","last_became_peered":"2026-03-09T18:29:02.825906+0000","last_unstale":"2026-03-09T18:29:03.816194+0000","last_undegraded":"2026-03-09T18:29:03.816194+0000","last_fullsized":"2026-03-09T18:29:03.816194+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787
695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:32:52.061487+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1e","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.816221+0000","last_change":"2026-03-09T18:28:56.744170+0000","last_active":"2026-03-09T18:29:03.816221+0000","last_peered":"2026-03-09T18:29:03.816221+0000","last_clean":"2026-03-09T18:29:03.816221+0000","last_became_active":"2026-03-09T18:28:56.742551+0000","last_became_peered":"2026-03-09T18:28:56.742551+0000","las
t_unstale":"2026-03-09T18:29:03.816221+0000","last_undegraded":"2026-03-09T18:29:03.816221+0000","last_fullsized":"2026-03-09T18:29:03.816221+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T02:28:45.789965+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,2],"acting":[3,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.19","ve
rsion":"54'15","reported_seq":46,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.677236+0000","last_change":"2026-03-09T18:28:58.753837+0000","last_active":"2026-03-09T18:29:04.677236+0000","last_peered":"2026-03-09T18:29:04.677236+0000","last_clean":"2026-03-09T18:29:04.677236+0000","last_became_active":"2026-03-09T18:28:58.753718+0000","last_became_peered":"2026-03-09T18:28:58.753718+0000","last_unstale":"2026-03-09T18:29:04.677236+0000","last_undegraded":"2026-03-09T18:29:04.677236+0000","last_fullsized":"2026-03-09T18:29:04.677236+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:26:07.802012+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,2,0],"acting":[3,2,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.18","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.828417+0000","last_change":"2026-03-09T18:29:00.809301+0000","last_active":"2026-03-09T18:29:03.828417+0000","last_peered":"2026-03-09T18:29:03.828417+0000","last_clean":"2026-03-09T18:29:03.828417+0000","last_became_active":"2026-03-09T18:29:00.809202+0000","last_became_peered":"2026-03-09T18:29:00.809202+0000","last_unstale":"2026-03-09T18:29:03.828417+0000","last_undegraded":"2026-03-09T18:29:03.828417+0000","last_fullsized":"2026-03-09T18:29:03.828417+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:
59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:27:23.808124+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.1d","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T18:29:07.708802+0000","last_change":"2026-03-09T18:28:56.741149+0000","last_active":"2026-03-09T18:29:07.708802+0000","last_peered":"2026-03-09T18:29:07.708802+0000","last_clean":"2026-03-09T18:29:07.708802+0000","last_became_active":"2026-03-09T18:28:56.741074+0000","last_became_peered":"2026-03-09T18:28:56.741074+0000
","last_unstale":"2026-03-09T18:29:07.708802+0000","last_undegraded":"2026-03-09T18:29:07.708802+0000","last_fullsized":"2026-03-09T18:29:07.708802+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:43:45.473904+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"4.1
a","version":"54'9","reported_seq":37,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.521615+0000","last_change":"2026-03-09T18:28:58.752473+0000","last_active":"2026-03-09T18:29:04.521615+0000","last_peered":"2026-03-09T18:29:04.521615+0000","last_clean":"2026-03-09T18:29:04.521615+0000","last_became_active":"2026-03-09T18:28:58.752379+0000","last_became_peered":"2026-03-09T18:28:58.752379+0000","last_unstale":"2026-03-09T18:29:04.521615+0000","last_undegraded":"2026-03-09T18:29:04.521615+0000","last_fullsized":"2026-03-09T18:29:04.521615+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:45:55.685837+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,0],"acting":[4,3,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.1b","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T18:29:07.708828+0000","last_change":"2026-03-09T18:29:00.816852+0000","last_active":"2026-03-09T18:29:07.708828+0000","last_peered":"2026-03-09T18:29:07.708828+0000","last_clean":"2026-03-09T18:29:07.708828+0000","last_became_active":"2026-03-09T18:29:00.809225+0000","last_became_peered":"2026-03-09T18:29:00.809225+0000","last_unstale":"2026-03-09T18:29:07.708828+0000","last_undegraded":"2026-03-09T18:29:07.708828+0000","last_fullsized":"2026-03-09T18:29:07.708828+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:
59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:35:44.330970+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,0,7],"acting":[5,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.18","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.819774+0000","last_change":"2026-03-09T18:29:02.825582+0000","last_active":"2026-03-09T18:29:03.819774+0000","last_peered":"2026-03-09T18:29:03.819774+0000","last_clean":"2026-03-09T18:29:03.819774+0000","last_became_active":"2026-03-09T18:29:02.825489+0000","last_became_peered":"2026-03-09T18:29:02.825489+0000
","last_unstale":"2026-03-09T18:29:03.819774+0000","last_undegraded":"2026-03-09T18:29:03.819774+0000","last_fullsized":"2026-03-09T18:29:03.819774+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T18:39:29.679281+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,7],"acting":[0,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.1
c","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T18:29:07.708934+0000","last_change":"2026-03-09T18:28:56.736006+0000","last_active":"2026-03-09T18:29:07.708934+0000","last_peered":"2026-03-09T18:29:07.708934+0000","last_clean":"2026-03-09T18:29:07.708934+0000","last_became_active":"2026-03-09T18:28:56.735874+0000","last_became_peered":"2026-03-09T18:28:56.735874+0000","last_unstale":"2026-03-09T18:29:07.708934+0000","last_undegraded":"2026-03-09T18:29:07.708934+0000","last_fullsized":"2026-03-09T18:29:07.708934+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:18:33.143247+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,1],"acting":[5,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"4.1b","version":"54'5","reported_seq":31,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.492396+0000","last_change":"2026-03-09T18:28:58.768364+0000","last_active":"2026-03-09T18:29:04.492396+0000","last_peered":"2026-03-09T18:29:04.492396+0000","last_clean":"2026-03-09T18:29:04.492396+0000","last_became_active":"2026-03-09T18:28:58.766879+0000","last_became_peered":"2026-03-09T18:28:58.766879+0000","last_unstale":"2026-03-09T18:29:04.492396+0000","last_undegraded":"2026-03-09T18:29:04.492396+0000","last_fullsized":"2026-03-09T18:29:04.492396+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.71
9732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T02:08:46.335523+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":11,"num_read_kb":7,"num_write":6,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,1],"acting":[4,3,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.1a","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.798699+0000","last_change":"2026-03-09T18:29:00.808284+0000","last_active":"2026-03-09T18:29:03.798699+0000","last_peered":"2026-03-09T18:29:03.798699+0000","last_clean":"2026-03-09T18:29:03.798699+0000","last_became_active":"2026-03-09T18:29:00.808181+0000","last_became_peered":"2026-03-09T18:29:00.808181+0000",
"last_unstale":"2026-03-09T18:29:03.798699+0000","last_undegraded":"2026-03-09T18:29:03.798699+0000","last_fullsized":"2026-03-09T18:29:03.798699+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:09:43.517918+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.19"
,"version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T18:29:07.708909+0000","last_change":"2026-03-09T18:29:02.836031+0000","last_active":"2026-03-09T18:29:07.708909+0000","last_peered":"2026-03-09T18:29:07.708909+0000","last_clean":"2026-03-09T18:29:07.708909+0000","last_became_active":"2026-03-09T18:29:02.835445+0000","last_became_peered":"2026-03-09T18:29:02.835445+0000","last_unstale":"2026-03-09T18:29:07.708909+0000","last_undegraded":"2026-03-09T18:29:07.708909+0000","last_fullsized":"2026-03-09T18:29:07.708909+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:08:29.959920+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,3],"acting":[5,1,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.1e","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.829361+0000","last_change":"2026-03-09T18:29:02.831338+0000","last_active":"2026-03-09T18:29:03.829361+0000","last_peered":"2026-03-09T18:29:03.829361+0000","last_clean":"2026-03-09T18:29:03.829361+0000","last_became_active":"2026-03-09T18:29:02.829892+0000","last_became_peered":"2026-03-09T18:29:02.829892+0000","last_unstale":"2026-03-09T18:29:03.829361+0000","last_undegraded":"2026-03-09T18:29:03.829361+0000","last_fullsized":"2026-03-09T18:29:03.829361+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787
695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:05:41.692572+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,5],"acting":[4,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.1b","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.819175+0000","last_change":"2026-03-09T18:28:56.751172+0000","last_active":"2026-03-09T18:29:03.819175+0000","last_peered":"2026-03-09T18:29:03.819175+0000","last_clean":"2026-03-09T18:29:03.819175+0000","last_became_active":"2026-03-09T18:28:56.750897+0000","last_became_peered":"2026-03-09T18:28:56.750897+0000","las
t_unstale":"2026-03-09T18:29:03.819175+0000","last_undegraded":"2026-03-09T18:29:03.819175+0000","last_fullsized":"2026-03-09T18:29:03.819175+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:43:10.905138+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,7],"acting":[0,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.1c","ve
rsion":"54'15","reported_seq":46,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.569015+0000","last_change":"2026-03-09T18:28:58.755586+0000","last_active":"2026-03-09T18:29:04.569015+0000","last_peered":"2026-03-09T18:29:04.569015+0000","last_clean":"2026-03-09T18:29:04.569015+0000","last_became_active":"2026-03-09T18:28:58.755350+0000","last_became_peered":"2026-03-09T18:28:58.755350+0000","last_unstale":"2026-03-09T18:29:04.569015+0000","last_undegraded":"2026-03-09T18:29:04.569015+0000","last_fullsized":"2026-03-09T18:29:04.569015+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T05:17:38.599026+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,1,3],"acting":[2,1,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.1d","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.825741+0000","last_change":"2026-03-09T18:29:00.809132+0000","last_active":"2026-03-09T18:29:03.825741+0000","last_peered":"2026-03-09T18:29:03.825741+0000","last_clean":"2026-03-09T18:29:03.825741+0000","last_became_active":"2026-03-09T18:29:00.808854+0000","last_became_peered":"2026-03-09T18:29:00.808854+0000","last_unstale":"2026-03-09T18:29:03.825741+0000","last_undegraded":"2026-03-09T18:29:03.825741+0000","last_fullsized":"2026-03-09T18:29:03.825741+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:
59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T18:53:09.736568+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.1f","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.817063+0000","last_change":"2026-03-09T18:29:02.831100+0000","last_active":"2026-03-09T18:29:03.817063+0000","last_peered":"2026-03-09T18:29:03.817063+0000","last_clean":"2026-03-09T18:29:03.817063+0000","last_became_active":"2026-03-09T18:29:02.831000+0000","last_became_peered":"2026-03-09T18:29:02.831000+0000
","last_unstale":"2026-03-09T18:29:03.817063+0000","last_undegraded":"2026-03-09T18:29:03.817063+0000","last_fullsized":"2026-03-09T18:29:03.817063+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:48:40.461585+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1
a","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.828823+0000","last_change":"2026-03-09T18:28:56.743538+0000","last_active":"2026-03-09T18:29:03.828823+0000","last_peered":"2026-03-09T18:29:03.828823+0000","last_clean":"2026-03-09T18:29:03.828823+0000","last_became_active":"2026-03-09T18:28:56.737887+0000","last_became_peered":"2026-03-09T18:28:56.737887+0000","last_unstale":"2026-03-09T18:29:03.828823+0000","last_undegraded":"2026-03-09T18:29:03.828823+0000","last_fullsized":"2026-03-09T18:29:03.828823+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:08:12.364439+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"4.1d","version":"54'12","reported_seq":44,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.680200+0000","last_change":"2026-03-09T18:28:58.763714+0000","last_active":"2026-03-09T18:29:04.680200+0000","last_peered":"2026-03-09T18:29:04.680200+0000","last_clean":"2026-03-09T18:29:04.680200+0000","last_became_active":"2026-03-09T18:28:58.763113+0000","last_became_peered":"2026-03-09T18:28:58.763113+0000","last_unstale":"2026-03-09T18:29:04.680200+0000","last_undegraded":"2026-03-09T18:29:04.680200+0000","last_fullsized":"2026-03-09T18:29:04.680200+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.7
19732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:34:02.312889+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,4],"acting":[3,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.1c","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.828799+0000","last_change":"2026-03-09T18:29:00.811578+0000","last_active":"2026-03-09T18:29:03.828799+0000","last_peered":"2026-03-09T18:29:03.828799+0000","last_clean":"2026-03-09T18:29:03.828799+0000","last_became_active":"2026-03-09T18:29:00.811434+0000","last_became_peered":"2026-03-09T18:29:00.811434+
0000","last_unstale":"2026-03-09T18:29:03.828799+0000","last_undegraded":"2026-03-09T18:29:03.828799+0000","last_fullsized":"2026-03-09T18:29:03.828799+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:47:02.563622+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":
"6.1c","version":"54'1","reported_seq":14,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.799582+0000","last_change":"2026-03-09T18:29:02.831962+0000","last_active":"2026-03-09T18:29:03.799582+0000","last_peered":"2026-03-09T18:29:03.799582+0000","last_clean":"2026-03-09T18:29:03.799582+0000","last_became_active":"2026-03-09T18:29:02.831624+0000","last_became_peered":"2026-03-09T18:29:02.831624+0000","last_unstale":"2026-03-09T18:29:03.799582+0000","last_undegraded":"2026-03-09T18:29:03.799582+0000","last_fullsized":"2026-03-09T18:29:03.799582+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:55:47.388332+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":403,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,2],"acting":[7,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.19","version":"47'1","reported_seq":26,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.826476+0000","last_change":"2026-03-09T18:28:56.748709+0000","last_active":"2026-03-09T18:29:03.826476+0000","last_peered":"2026-03-09T18:29:03.826476+0000","last_clean":"2026-03-09T18:29:03.826476+0000","last_became_active":"2026-03-09T18:28:56.745198+0000","last_became_peered":"2026-03-09T18:28:56.745198+0000","last_unstale":"2026-03-09T18:29:03.826476+0000","last_undegraded":"2026-03-09T18:29:03.826476+0000","last_fullsized":"2026-03-09T18:29:03.826476+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.
698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:25:51.347690+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,4],"acting":[1,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.1e","version":"54'10","reported_seq":36,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.664205+0000","last_change":"2026-03-09T18:28:58.958989+0000","last_active":"2026-03-09T18:29:04.664205+0000","last_peered":"2026-03-09T18:29:04.664205+0000","last_clean":"2026-03-09T18:29:04.664205+0000","last_became_active":"2026-03-09T18:28:58.958622+0000","last_became_peered":"2026-03-09T18:28:58.958622+0000
","last_unstale":"2026-03-09T18:29:04.664205+0000","last_undegraded":"2026-03-09T18:29:04.664205+0000","last_fullsized":"2026-03-09T18:29:04.664205+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T23:21:27.651666+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid
":"5.1f","version":"54'8","reported_seq":31,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.238902+0000","last_change":"2026-03-09T18:29:00.809873+0000","last_active":"2026-03-09T18:29:04.238902+0000","last_peered":"2026-03-09T18:29:04.238902+0000","last_clean":"2026-03-09T18:29:04.238902+0000","last_became_active":"2026-03-09T18:29:00.809701+0000","last_became_peered":"2026-03-09T18:29:00.809701+0000","last_unstale":"2026-03-09T18:29:04.238902+0000","last_undegraded":"2026-03-09T18:29:04.238902+0000","last_fullsized":"2026-03-09T18:29:04.238902+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T19:08:06.847515+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"4.f","version":"54'15","reported_seq":46,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.688629+0000","last_change":"2026-03-09T18:28:58.756197+0000","last_active":"2026-03-09T18:29:04.688629+0000","last_peered":"2026-03-09T18:29:04.688629+0000","last_clean":"2026-03-09T18:29:04.688629+0000","last_became_active":"2026-03-09T18:28:58.756004+0000","last_became_peered":"2026-03-09T18:28:58.756004+0000","last_unstale":"2026-03-09T18:29:04.688629+0000","last_undegraded":"2026-03-09T18:29:04.688629+0000","last_fullsized":"2026-03-09T18:29:04.688629+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.71
9732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T03:30:00.619326+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,4],"acting":[1,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.8","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.816364+0000","last_change":"2026-03-09T18:28:56.741491+0000","last_active":"2026-03-09T18:29:03.816364+0000","last_peered":"2026-03-09T18:29:03.816364+0000","last_clean":"2026-03-09T18:29:03.816364+0000","last_became_active":"2026-03-09T18:28:56.741240+0000","last_became_peered":"2026-03-09T18:28:56.741240+00
00","last_unstale":"2026-03-09T18:29:03.816364+0000","last_undegraded":"2026-03-09T18:29:03.816364+0000","last_fullsized":"2026-03-09T18:29:03.816364+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:50:33.350549+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5
.e","version":"54'8","reported_seq":28,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.252607+0000","last_change":"2026-03-09T18:29:00.824971+0000","last_active":"2026-03-09T18:29:04.252607+0000","last_peered":"2026-03-09T18:29:04.252607+0000","last_clean":"2026-03-09T18:29:04.252607+0000","last_became_active":"2026-03-09T18:29:00.824848+0000","last_became_peered":"2026-03-09T18:29:00.824848+0000","last_unstale":"2026-03-09T18:29:04.252607+0000","last_undegraded":"2026-03-09T18:29:04.252607+0000","last_fullsized":"2026-03-09T18:29:04.252607+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:07:14.583289+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,0],"acting":[4,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.d","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T18:29:07.708576+0000","last_change":"2026-03-09T18:29:02.809464+0000","last_active":"2026-03-09T18:29:07.708576+0000","last_peered":"2026-03-09T18:29:07.708576+0000","last_clean":"2026-03-09T18:29:07.708576+0000","last_became_active":"2026-03-09T18:29:02.809327+0000","last_became_peered":"2026-03-09T18:29:02.809327+0000","last_unstale":"2026-03-09T18:29:07.708576+0000","last_undegraded":"2026-03-09T18:29:07.708576+0000","last_fullsized":"2026-03-09T18:29:07.708576+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.7876
95+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T03:10:57.044992+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"4.0","version":"54'18","reported_seq":53,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.609963+0000","last_change":"2026-03-09T18:28:58.957906+0000","last_active":"2026-03-09T18:29:04.609963+0000","last_peered":"2026-03-09T18:29:04.609963+0000","last_clean":"2026-03-09T18:29:04.609963+0000","last_became_active":"2026-03-09T18:28:58.957209+0000","last_became_peered":"2026-03-09T18:28:58.957209+0000","las
t_unstale":"2026-03-09T18:29:04.609963+0000","last_undegraded":"2026-03-09T18:29:04.609963+0000","last_fullsized":"2026-03-09T18:29:04.609963+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":18,"log_dups_size":0,"ondisk_log_size":18,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T23:20:43.011755+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":34,"num_read_kb":22,"num_write":20,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3
.7","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.816423+0000","last_change":"2026-03-09T18:28:56.741679+0000","last_active":"2026-03-09T18:29:03.816423+0000","last_peered":"2026-03-09T18:29:03.816423+0000","last_clean":"2026-03-09T18:29:03.816423+0000","last_became_active":"2026-03-09T18:28:56.741410+0000","last_became_peered":"2026-03-09T18:28:56.741410+0000","last_unstale":"2026-03-09T18:29:03.816423+0000","last_undegraded":"2026-03-09T18:29:03.816423+0000","last_fullsized":"2026-03-09T18:29:03.816423+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T05:45:34.523525+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.1","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.828481+0000","last_change":"2026-03-09T18:29:00.823863+0000","last_active":"2026-03-09T18:29:03.828481+0000","last_peered":"2026-03-09T18:29:03.828481+0000","last_clean":"2026-03-09T18:29:03.828481+0000","last_became_active":"2026-03-09T18:29:00.823760+0000","last_became_peered":"2026-03-09T18:29:00.823760+0000","last_unstale":"2026-03-09T18:29:03.828481+0000","last_undegraded":"2026-03-09T18:29:03.828481+0000","last_fullsized":"2026-03-09T18:29:03.828481+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.7301
10+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T04:47:29.246968+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,7],"acting":[4,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.2","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.828456+0000","last_change":"2026-03-09T18:29:02.824568+0000","last_active":"2026-03-09T18:29:03.828456+0000","last_peered":"2026-03-09T18:29:03.828456+0000","last_clean":"2026-03-09T18:29:03.828456+0000","last_became_active":"2026-03-09T18:29:02.824449+0000","last_became_peered":"2026-03-09T18:29:02.824449+0000","last_
unstale":"2026-03-09T18:29:03.828456+0000","last_undegraded":"2026-03-09T18:29:03.828456+0000","last_fullsized":"2026-03-09T18:29:03.828456+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:06:04.824142+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"4.1","versi
on":"54'14","reported_seq":42,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.666861+0000","last_change":"2026-03-09T18:28:58.753916+0000","last_active":"2026-03-09T18:29:04.666861+0000","last_peered":"2026-03-09T18:29:04.666861+0000","last_clean":"2026-03-09T18:29:04.666861+0000","last_became_active":"2026-03-09T18:28:58.751911+0000","last_became_peered":"2026-03-09T18:28:58.751911+0000","last_unstale":"2026-03-09T18:29:04.666861+0000","last_undegraded":"2026-03-09T18:29:04.666861+0000","last_fullsized":"2026-03-09T18:29:04.666861+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":14,"log_dups_size":0,"ondisk_log_size":14,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T19:58:47.607740+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":21,"num_read_kb":14,"num_write":14,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,6],"acting":[4,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.6","version":"47'1","reported_seq":26,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.819237+0000","last_change":"2026-03-09T18:28:56.751235+0000","last_active":"2026-03-09T18:29:03.819237+0000","last_peered":"2026-03-09T18:29:03.819237+0000","last_clean":"2026-03-09T18:29:03.819237+0000","last_became_active":"2026-03-09T18:28:56.751007+0000","last_became_peered":"2026-03-09T18:28:56.751007+0000","last_unstale":"2026-03-09T18:29:03.819237+0000","last_undegraded":"2026-03-09T18:29:03.819237+0000","last_fullsized":"2026-03-09T18:29:03.819237+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55
.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:40:10.311698+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.0","version":"54'8","reported_seq":28,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.248547+0000","last_change":"2026-03-09T18:29:00.811306+0000","last_active":"2026-03-09T18:29:04.248547+0000","last_peered":"2026-03-09T18:29:04.248547+0000","last_clean":"2026-03-09T18:29:04.248547+0000","last_became_active":"2026-03-09T18:29:00.811178+0000","last_became_peered":"2026-03-09T18:29:00.811178+0000"
,"last_unstale":"2026-03-09T18:29:04.248547+0000","last_undegraded":"2026-03-09T18:29:04.248547+0000","last_fullsized":"2026-03-09T18:29:04.248547+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T19:32:03.352840+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,4],"acting":[3,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.3"
,"version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.798670+0000","last_change":"2026-03-09T18:29:02.824755+0000","last_active":"2026-03-09T18:29:03.798670+0000","last_peered":"2026-03-09T18:29:03.798670+0000","last_clean":"2026-03-09T18:29:03.798670+0000","last_became_active":"2026-03-09T18:29:02.824431+0000","last_became_peered":"2026-03-09T18:29:02.824431+0000","last_unstale":"2026-03-09T18:29:03.798670+0000","last_undegraded":"2026-03-09T18:29:03.798670+0000","last_fullsized":"2026-03-09T18:29:03.798670+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:14:23.952912+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,2],"acting":[7,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"4.2","version":"54'10","reported_seq":36,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.643431+0000","last_change":"2026-03-09T18:28:58.752273+0000","last_active":"2026-03-09T18:29:04.643431+0000","last_peered":"2026-03-09T18:29:04.643431+0000","last_clean":"2026-03-09T18:29:04.643431+0000","last_became_active":"2026-03-09T18:28:58.752071+0000","last_became_peered":"2026-03-09T18:28:58.752071+0000","last_unstale":"2026-03-09T18:29:04.643431+0000","last_undegraded":"2026-03-09T18:29:04.643431+0000","last_fullsized":"2026-03-09T18:29:04.643431+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.71
9732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T18:50:32.470344+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.5","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T18:29:07.709268+0000","last_change":"2026-03-09T18:28:56.738083+0000","last_active":"2026-03-09T18:29:07.709268+0000","last_peered":"2026-03-09T18:29:07.709268+0000","last_clean":"2026-03-09T18:29:07.709268+0000","last_became_active":"2026-03-09T18:28:56.737982+0000","last_became_peered":"2026-03-09T18:28:56.737982+0000
","last_unstale":"2026-03-09T18:29:07.709268+0000","last_undegraded":"2026-03-09T18:29:07.709268+0000","last_fullsized":"2026-03-09T18:29:07.709268+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T19:06:33.257242+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,2],"acting":[5,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.3
","version":"54'8","reported_seq":27,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.255852+0000","last_change":"2026-03-09T18:29:00.824638+0000","last_active":"2026-03-09T18:29:04.255852+0000","last_peered":"2026-03-09T18:29:04.255852+0000","last_clean":"2026-03-09T18:29:04.255852+0000","last_became_active":"2026-03-09T18:29:00.824547+0000","last_became_peered":"2026-03-09T18:29:00.824547+0000","last_unstale":"2026-03-09T18:29:04.255852+0000","last_undegraded":"2026-03-09T18:29:04.255852+0000","last_fullsized":"2026-03-09T18:29:04.255852+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:30:50.072164+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,6,5],"acting":[0,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.0","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.820142+0000","last_change":"2026-03-09T18:29:02.822449+0000","last_active":"2026-03-09T18:29:03.820142+0000","last_peered":"2026-03-09T18:29:03.820142+0000","last_clean":"2026-03-09T18:29:03.820142+0000","last_became_active":"2026-03-09T18:29:02.822218+0000","last_became_peered":"2026-03-09T18:29:02.822218+0000","last_unstale":"2026-03-09T18:29:03.820142+0000","last_undegraded":"2026-03-09T18:29:03.820142+0000","last_fullsized":"2026-03-09T18:29:03.820142+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.7876
95+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:24:57.749688+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,3,2],"acting":[0,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.3","version":"54'19","reported_seq":57,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.632583+0000","last_change":"2026-03-09T18:28:58.959129+0000","last_active":"2026-03-09T18:29:04.632583+0000","last_peered":"2026-03-09T18:29:04.632583+0000","last_clean":"2026-03-09T18:29:04.632583+0000","last_became_active":"2026-03-09T18:28:58.958838+0000","last_became_peered":"2026-03-09T18:28:58.958838+0000","las
t_unstale":"2026-03-09T18:29:04.632583+0000","last_undegraded":"2026-03-09T18:29:04.632583+0000","last_fullsized":"2026-03-09T18:29:04.632583+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:23:56.655830+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":39,"num_read_kb":25,"num_write":22,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,7],"acting":[0,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3
.4","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.826370+0000","last_change":"2026-03-09T18:28:56.739172+0000","last_active":"2026-03-09T18:29:03.826370+0000","last_peered":"2026-03-09T18:29:03.826370+0000","last_clean":"2026-03-09T18:29:03.826370+0000","last_became_active":"2026-03-09T18:28:56.739065+0000","last_became_peered":"2026-03-09T18:28:56.739065+0000","last_unstale":"2026-03-09T18:29:03.826370+0000","last_undegraded":"2026-03-09T18:29:03.826370+0000","last_fullsized":"2026-03-09T18:29:03.826370+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:19:47.432313+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,5],"acting":[1,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.2","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.839781+0000","last_change":"2026-03-09T18:29:00.806727+0000","last_active":"2026-03-09T18:29:03.839781+0000","last_peered":"2026-03-09T18:29:03.839781+0000","last_clean":"2026-03-09T18:29:03.839781+0000","last_became_active":"2026-03-09T18:29:00.806617+0000","last_became_peered":"2026-03-09T18:29:00.806617+0000","last_unstale":"2026-03-09T18:29:03.839781+0000","last_undegraded":"2026-03-09T18:29:03.839781+0000","last_fullsized":"2026-03-09T18:29:03.839781+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.7301
10+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T06:16:07.812168+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.1","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.826353+0000","last_change":"2026-03-09T18:29:02.807483+0000","last_active":"2026-03-09T18:29:03.826353+0000","last_peered":"2026-03-09T18:29:03.826353+0000","last_clean":"2026-03-09T18:29:03.826353+0000","last_became_active":"2026-03-09T18:29:02.807412+0000","last_became_peered":"2026-03-09T18:29:02.807412+0000","last_
unstale":"2026-03-09T18:29:03.826353+0000","last_undegraded":"2026-03-09T18:29:03.826353+0000","last_fullsized":"2026-03-09T18:29:03.826353+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T19:24:23.736164+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,2],"acting":[1,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.4","versi
on":"54'28","reported_seq":71,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.646601+0000","last_change":"2026-03-09T18:28:58.758395+0000","last_active":"2026-03-09T18:29:04.646601+0000","last_peered":"2026-03-09T18:29:04.646601+0000","last_clean":"2026-03-09T18:29:04.646601+0000","last_became_active":"2026-03-09T18:28:58.756605+0000","last_became_peered":"2026-03-09T18:28:58.756605+0000","last_unstale":"2026-03-09T18:29:04.646601+0000","last_undegraded":"2026-03-09T18:29:04.646601+0000","last_fullsized":"2026-03-09T18:29:04.646601+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":28,"log_dups_size":0,"ondisk_log_size":28,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:19:05.758695+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":358,"num_objects":10,"num_object_clones":0,"num_object_copies":30,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":10,"num_whiteouts":0,"num_read":48,"num_read_kb":33,"num_write":26,"num_write_kb":4,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,3],"acting":[1,2,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.3","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.829010+0000","last_change":"2026-03-09T18:28:56.754612+0000","last_active":"2026-03-09T18:29:03.829010+0000","last_peered":"2026-03-09T18:29:03.829010+0000","last_clean":"2026-03-09T18:29:03.829010+0000","last_became_active":"2026-03-09T18:28:56.754491+0000","last_became_peered":"2026-03-09T18:28:56.754491+0000","last_unstale":"2026-03-09T18:29:03.829010+0000","last_undegraded":"2026-03-09T18:29:03.829010+0000","last_fullsized":"2026-03-09T18:29:03.829010+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28
:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:19:50.967887+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,0,6],"acting":[4,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.2","version":"49'2","reported_seq":34,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T18:29:07.709137+0000","last_change":"2026-03-09T18:28:58.733692+0000","last_active":"2026-03-09T18:29:07.709137+0000","last_peered":"2026-03-09T18:29:07.709137+0000","last_clean":"2026-03-09T18:29:07.709137+0000","last_became_active":"2026-03-09T18:28:56.740706+0000","last_became_peered":"2026-03-09T18:28:56.740706+000
0","last_unstale":"2026-03-09T18:29:07.709137+0000","last_undegraded":"2026-03-09T18:29:07.709137+0000","last_fullsized":"2026-03-09T18:29:07.709137+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T03:34:09.499101+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00043812000000000001,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,6],"acting":[5,1,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_
snaps":[]},{"pgid":"5.5","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.819544+0000","last_change":"2026-03-09T18:29:00.811774+0000","last_active":"2026-03-09T18:29:03.819544+0000","last_peered":"2026-03-09T18:29:03.819544+0000","last_clean":"2026-03-09T18:29:03.819544+0000","last_became_active":"2026-03-09T18:29:00.810734+0000","last_became_peered":"2026-03-09T18:29:00.810734+0000","last_unstale":"2026-03-09T18:29:03.819544+0000","last_undegraded":"2026-03-09T18:29:03.819544+0000","last_fullsized":"2026-03-09T18:29:03.819544+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:17:46.567639+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.6","version":"54'1","reported_seq":14,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.816150+0000","last_change":"2026-03-09T18:29:02.835124+0000","last_active":"2026-03-09T18:29:03.816150+0000","last_peered":"2026-03-09T18:29:03.816150+0000","last_clean":"2026-03-09T18:29:03.816150+0000","last_became_active":"2026-03-09T18:29:02.833514+0000","last_became_peered":"2026-03-09T18:29:02.833514+0000","last_unstale":"2026-03-09T18:29:03.816150+0000","last_undegraded":"2026-03-09T18:29:03.816150+0000","last_fullsized":"2026-03-09T18:29:03.816150+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787
695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:53:44.975182+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":13,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,4,7],"acting":[3,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.7","version":"54'13","reported_seq":48,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.549744+0000","last_change":"2026-03-09T18:28:58.752148+0000","last_active":"2026-03-09T18:29:04.549744+0000","last_peered":"2026-03-09T18:29:04.549744+0000","last_clean":"2026-03-09T18:29:04.549744+0000","last_became_active":"2026-03-09T18:28:58.751599+0000","last_became_peered":"2026-03-09T18:28:58.751599+0000","l
ast_unstale":"2026-03-09T18:29:04.549744+0000","last_undegraded":"2026-03-09T18:29:04.549744+0000","last_fullsized":"2026-03-09T18:29:04.549744+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":13,"log_dups_size":0,"ondisk_log_size":13,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T18:34:02.381971+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":30,"num_read_kb":19,"num_write":16,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,0],"acting":[1,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":
"3.0","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.826400+0000","last_change":"2026-03-09T18:28:56.754725+0000","last_active":"2026-03-09T18:29:03.826400+0000","last_peered":"2026-03-09T18:29:03.826400+0000","last_clean":"2026-03-09T18:29:03.826400+0000","last_became_active":"2026-03-09T18:28:56.754489+0000","last_became_peered":"2026-03-09T18:28:56.754489+0000","last_unstale":"2026-03-09T18:29:03.826400+0000","last_undegraded":"2026-03-09T18:29:03.826400+0000","last_fullsized":"2026-03-09T18:29:03.826400+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:53:44.130075+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,6],"acting":[1,2,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.1","version":"47'1","reported_seq":31,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.798668+0000","last_change":"2026-03-09T18:28:58.745578+0000","last_active":"2026-03-09T18:29:03.798668+0000","last_peered":"2026-03-09T18:29:03.798668+0000","last_clean":"2026-03-09T18:29:03.798668+0000","last_became_active":"2026-03-09T18:28:56.739995+0000","last_became_peered":"2026-03-09T18:28:56.739995+0000","last_unstale":"2026-03-09T18:29:03.798668+0000","last_undegraded":"2026-03-09T18:29:03.798668+0000","last_fullsized":"2026-03-09T18:29:03.798668+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698
736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:44:23.980904+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00082507499999999998,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,0],"acting":[2,3,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"5.6","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.798702+0000","last_change":"2026-03-09T18:29:00.809433+0000","last_active":"2026-03-09T18:29:03.798702+0000","last_peered":"2026-03-09T18:29:03.798702+0000","last_clean":"2026-03-09T18:29:03.798702+0000","last_became_active":"2026-03-09T18:29:00.808179+0000","last_became_
peered":"2026-03-09T18:29:00.808179+0000","last_unstale":"2026-03-09T18:29:03.798702+0000","last_undegraded":"2026-03-09T18:29:03.798702+0000","last_fullsized":"2026-03-09T18:29:03.798702+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T05:55:18.612204+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,5,7],"acting":[2,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_pr
imary":2,"purged_snaps":[]},{"pgid":"6.5","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.798624+0000","last_change":"2026-03-09T18:29:02.838489+0000","last_active":"2026-03-09T18:29:03.798624+0000","last_peered":"2026-03-09T18:29:03.798624+0000","last_clean":"2026-03-09T18:29:03.798624+0000","last_became_active":"2026-03-09T18:29:02.838319+0000","last_became_peered":"2026-03-09T18:29:02.838319+0000","last_unstale":"2026-03-09T18:29:03.798624+0000","last_undegraded":"2026-03-09T18:29:03.798624+0000","last_fullsized":"2026-03-09T18:29:03.798624+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:29:15.884561+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,3],"acting":[7,6,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"4.6","version":"54'12","reported_seq":39,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.563963+0000","last_change":"2026-03-09T18:28:58.759952+0000","last_active":"2026-03-09T18:29:04.563963+0000","last_peered":"2026-03-09T18:29:04.563963+0000","last_clean":"2026-03-09T18:29:04.563963+0000","last_became_active":"2026-03-09T18:28:58.759786+0000","last_became_peered":"2026-03-09T18:28:58.759786+0000","last_unstale":"2026-03-09T18:29:04.563963+0000","last_undegraded":"2026-03-09T18:29:04.563963+0000","last_fullsized":"2026-03-09T18:29:04.563963+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.71
9732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T03:12:19.211826+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":18,"num_read_kb":12,"num_write":12,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,2],"acting":[0,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.1","version":"47'1","reported_seq":31,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.001389+0000","last_change":"2026-03-09T18:28:56.751086+0000","last_active":"2026-03-09T18:29:04.001389+0000","last_peered":"2026-03-09T18:29:04.001389+0000","last_clean":"2026-03-09T18:29:04.001389+0000","last_became_active":"2026-03-09T18:28:56.750738+0000","last_became_peered":"2026-03-09T18:28:56.750738+000
0","last_unstale":"2026-03-09T18:29:04.001389+0000","last_undegraded":"2026-03-09T18:29:04.001389+0000","last_fullsized":"2026-03-09T18:29:04.001389+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:48:45.415664+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":436,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":7,"num_read_kb":7,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,3],"acting":[0,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"
2.0","version":"54'5","reported_seq":41,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:06.636357+0000","last_change":"2026-03-09T18:28:58.955759+0000","last_active":"2026-03-09T18:29:06.636357+0000","last_peered":"2026-03-09T18:29:06.636357+0000","last_clean":"2026-03-09T18:29:06.636357+0000","last_became_active":"2026-03-09T18:28:56.746218+0000","last_became_peered":"2026-03-09T18:28:56.746218+0000","last_unstale":"2026-03-09T18:29:06.636357+0000","last_undegraded":"2026-03-09T18:29:06.636357+0000","last_fullsized":"2026-03-09T18:29:06.636357+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:08:22.225165+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00051439300000000003,"stat_sum":{"num_bytes":389,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":8,"num_read_kb":3,"num_write":4,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,1,0],"acting":[7,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"5.7","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T18:29:07.708978+0000","last_change":"2026-03-09T18:29:00.809419+0000","last_active":"2026-03-09T18:29:07.708978+0000","last_peered":"2026-03-09T18:29:07.708978+0000","last_clean":"2026-03-09T18:29:07.708978+0000","last_became_active":"2026-03-09T18:29:00.808721+0000","last_became_peered":"2026-03-09T18:29:00.808721+0000","last_unstale":"2026-03-09T18:29:07.708978+0000","last_undegraded":"2026-03-09T18:29:07.708978+0000","last_fullsized":"2026-03-09T18:29:07.708978+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0",
"last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T19:33:31.802189+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.4","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.826171+0000","last_change":"2026-03-09T18:29:02.825333+0000","last_active":"2026-03-09T18:29:03.826171+0000","last_peered":"2026-03-09T18:29:03.826171+0000","last_clean":"2026-03-09T18:29:03.826171+0000","last_became_active":"2026-03-09T18:29:02.825075+0000","last_became
_peered":"2026-03-09T18:29:02.825075+0000","last_unstale":"2026-03-09T18:29:03.826171+0000","last_undegraded":"2026-03-09T18:29:03.826171+0000","last_fullsized":"2026-03-09T18:29:03.826171+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:11:20.002612+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_p
rimary":1,"purged_snaps":[]},{"pgid":"4.5","version":"54'16","reported_seq":46,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.653668+0000","last_change":"2026-03-09T18:28:58.959895+0000","last_active":"2026-03-09T18:29:04.653668+0000","last_peered":"2026-03-09T18:29:04.653668+0000","last_clean":"2026-03-09T18:29:04.653668+0000","last_became_active":"2026-03-09T18:28:58.959681+0000","last_became_peered":"2026-03-09T18:28:58.959681+0000","last_unstale":"2026-03-09T18:29:04.653668+0000","last_undegraded":"2026-03-09T18:29:04.653668+0000","last_fullsized":"2026-03-09T18:29:04.653668+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":16,"log_dups_size":0,"ondisk_log_size":16,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:53:14.109787+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":154,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":25,"num_read_kb":15,"num_write":13,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"3.2","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.816576+0000","last_change":"2026-03-09T18:28:56.744432+0000","last_active":"2026-03-09T18:29:03.816576+0000","last_peered":"2026-03-09T18:29:03.816576+0000","last_clean":"2026-03-09T18:29:03.816576+0000","last_became_active":"2026-03-09T18:28:56.744310+0000","last_became_peered":"2026-03-09T18:28:56.744310+0000","last_unstale":"2026-03-09T18:29:03.816576+0000","last_undegraded":"2026-03-09T18:29:03.816576+0000","last_fullsized":"2026-03-09T18:29:03.816576+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:5
5.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T19:48:45.163023+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,5,6],"acting":[3,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"1.0","version":"18'32","reported_seq":35,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.799184+0000","last_change":"2026-03-09T18:28:53.997687+0000","last_active":"2026-03-09T18:29:03.799184+0000","last_peered":"2026-03-09T18:29:03.799184+0000","last_clean":"2026-03-09T18:29:03.799184+0000","last_became_active":"2026-03-09T18:28:53.690811+0000","last_became_peered":"2026-03-09T18:28:53.690811+0000
","last_unstale":"2026-03-09T18:29:03.799184+0000","last_undegraded":"2026-03-09T18:29:03.799184+0000","last_fullsized":"2026-03-09T18:29:03.799184+0000","mapping_epoch":44,"log_start":"0'0","ondisk_log_start":"0'0","created":17,"last_epoch_clean":45,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:01.605375+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:01.605375+0000","last_clean_scrub_stamp":"2026-03-09T18:28:01.605375+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T03:26:58.145626+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,6],"acting":[7,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps"
:[]},{"pgid":"5.4","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.799262+0000","last_change":"2026-03-09T18:29:00.824326+0000","last_active":"2026-03-09T18:29:03.799262+0000","last_peered":"2026-03-09T18:29:03.799262+0000","last_clean":"2026-03-09T18:29:03.799262+0000","last_became_active":"2026-03-09T18:29:00.824243+0000","last_became_peered":"2026-03-09T18:29:00.824243+0000","last_unstale":"2026-03-09T18:29:03.799262+0000","last_undegraded":"2026-03-09T18:29:03.799262+0000","last_fullsized":"2026-03-09T18:29:03.799262+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:14:15.092118+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,5],"acting":[7,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.7","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T18:29:07.708853+0000","last_change":"2026-03-09T18:29:02.827077+0000","last_active":"2026-03-09T18:29:07.708853+0000","last_peered":"2026-03-09T18:29:07.708853+0000","last_clean":"2026-03-09T18:29:07.708853+0000","last_became_active":"2026-03-09T18:29:02.826978+0000","last_became_peered":"2026-03-09T18:29:02.826978+0000","last_unstale":"2026-03-09T18:29:07.708853+0000","last_undegraded":"2026-03-09T18:29:07.708853+0000","last_fullsized":"2026-03-09T18:29:07.708853+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.7876
95+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:19:34.097009+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,4],"acting":[5,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"4.e","version":"54'11","reported_seq":40,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.658187+0000","last_change":"2026-03-09T18:28:58.779278+0000","last_active":"2026-03-09T18:29:04.658187+0000","last_peered":"2026-03-09T18:29:04.658187+0000","last_clean":"2026-03-09T18:29:04.658187+0000","last_became_active":"2026-03-09T18:28:58.777497+0000","last_became_peered":"2026-03-09T18:28:58.777497+0000","las
t_unstale":"2026-03-09T18:29:04.658187+0000","last_undegraded":"2026-03-09T18:29:04.658187+0000","last_fullsized":"2026-03-09T18:29:04.658187+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:02:21.813963+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3
.9","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.828923+0000","last_change":"2026-03-09T18:28:56.743290+0000","last_active":"2026-03-09T18:29:03.828923+0000","last_peered":"2026-03-09T18:29:03.828923+0000","last_clean":"2026-03-09T18:29:03.828923+0000","last_became_active":"2026-03-09T18:28:56.742847+0000","last_became_peered":"2026-03-09T18:28:56.742847+0000","last_unstale":"2026-03-09T18:29:03.828923+0000","last_undegraded":"2026-03-09T18:29:03.828923+0000","last_fullsized":"2026-03-09T18:29:03.828923+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:01:23.665747+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,7],"acting":[4,2,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.f","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T18:29:07.709482+0000","last_change":"2026-03-09T18:29:00.816651+0000","last_active":"2026-03-09T18:29:07.709482+0000","last_peered":"2026-03-09T18:29:07.709482+0000","last_clean":"2026-03-09T18:29:07.709482+0000","last_became_active":"2026-03-09T18:29:00.816549+0000","last_became_peered":"2026-03-09T18:29:00.816549+0000","last_unstale":"2026-03-09T18:29:07.709482+0000","last_undegraded":"2026-03-09T18:29:07.709482+0000","last_fullsized":"2026-03-09T18:29:07.709482+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.7301
10+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T03:34:29.374887+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.c","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.815825+0000","last_change":"2026-03-09T18:29:02.838484+0000","last_active":"2026-03-09T18:29:03.815825+0000","last_peered":"2026-03-09T18:29:03.815825+0000","last_clean":"2026-03-09T18:29:03.815825+0000","last_became_active":"2026-03-09T18:29:02.833798+0000","last_became_peered":"2026-03-09T18:29:02.833798+0000","last_
unstale":"2026-03-09T18:29:03.815825+0000","last_undegraded":"2026-03-09T18:29:03.815825+0000","last_fullsized":"2026-03-09T18:29:03.815825+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:40:52.341510+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.d","versi
on":"54'17","reported_seq":49,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.655565+0000","last_change":"2026-03-09T18:28:58.768445+0000","last_active":"2026-03-09T18:29:04.655565+0000","last_peered":"2026-03-09T18:29:04.655565+0000","last_clean":"2026-03-09T18:29:04.655565+0000","last_became_active":"2026-03-09T18:28:58.767278+0000","last_became_peered":"2026-03-09T18:28:58.767278+0000","last_unstale":"2026-03-09T18:29:04.655565+0000","last_undegraded":"2026-03-09T18:29:04.655565+0000","last_fullsized":"2026-03-09T18:29:04.655565+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":17,"log_dups_size":0,"ondisk_log_size":17,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T19:15:17.008984+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":29,"num_read_kb":19,"num_write":18,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,1],"acting":[4,2,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.a","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.839795+0000","last_change":"2026-03-09T18:28:56.753643+0000","last_active":"2026-03-09T18:29:03.839795+0000","last_peered":"2026-03-09T18:29:03.839795+0000","last_clean":"2026-03-09T18:29:03.839795+0000","last_became_active":"2026-03-09T18:28:56.751726+0000","last_became_peered":"2026-03-09T18:28:56.751726+0000","last_unstale":"2026-03-09T18:29:03.839795+0000","last_undegraded":"2026-03-09T18:29:03.839795+0000","last_fullsized":"2026-03-09T18:29:03.839795+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:5
5.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T23:51:52.791289+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,1],"acting":[6,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.c","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.826808+0000","last_change":"2026-03-09T18:29:00.809643+0000","last_active":"2026-03-09T18:29:03.826808+0000","last_peered":"2026-03-09T18:29:03.826808+0000","last_clean":"2026-03-09T18:29:03.826808+0000","last_became_active":"2026-03-09T18:29:00.809166+0000","last_became_peered":"2026-03-09T18:29:00.809166+0000",
"last_unstale":"2026-03-09T18:29:03.826808+0000","last_undegraded":"2026-03-09T18:29:03.826808+0000","last_fullsized":"2026-03-09T18:29:03.826808+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T23:09:16.649161+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.f",
"version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.798767+0000","last_change":"2026-03-09T18:29:02.835760+0000","last_active":"2026-03-09T18:29:03.798767+0000","last_peered":"2026-03-09T18:29:03.798767+0000","last_clean":"2026-03-09T18:29:03.798767+0000","last_became_active":"2026-03-09T18:29:02.835654+0000","last_became_peered":"2026-03-09T18:29:02.835654+0000","last_unstale":"2026-03-09T18:29:03.798767+0000","last_undegraded":"2026-03-09T18:29:03.798767+0000","last_fullsized":"2026-03-09T18:29:03.798767+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:28:58.669654+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,4],"acting":[2,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"4.c","version":"54'10","reported_seq":36,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.583000+0000","last_change":"2026-03-09T18:28:58.753844+0000","last_active":"2026-03-09T18:29:04.583000+0000","last_peered":"2026-03-09T18:29:04.583000+0000","last_clean":"2026-03-09T18:29:04.583000+0000","last_became_active":"2026-03-09T18:28:58.751882+0000","last_became_peered":"2026-03-09T18:28:58.751882+0000","last_unstale":"2026-03-09T18:29:04.583000+0000","last_undegraded":"2026-03-09T18:29:04.583000+0000","last_fullsized":"2026-03-09T18:29:04.583000+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.71
9732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T04:50:29.285191+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,6],"acting":[4,3,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.b","version":"47'1","reported_seq":31,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.000518+0000","last_change":"2026-03-09T18:28:56.739248+0000","last_active":"2026-03-09T18:29:04.000518+0000","last_peered":"2026-03-09T18:29:04.000518+0000","last_clean":"2026-03-09T18:29:04.000518+0000","last_became_active":"2026-03-09T18:28:56.739150+0000","last_became_peered":"2026-03-09T18:28:56.739150+000
0","last_unstale":"2026-03-09T18:29:04.000518+0000","last_undegraded":"2026-03-09T18:29:04.000518+0000","last_fullsized":"2026-03-09T18:29:04.000518+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T02:25:27.605504+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":993,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":7,"num_read_kb":7,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,4],"acting":[3,0,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"
5.d","version":"54'8","reported_seq":30,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.194048+0000","last_change":"2026-03-09T18:29:00.809902+0000","last_active":"2026-03-09T18:29:04.194048+0000","last_peered":"2026-03-09T18:29:04.194048+0000","last_clean":"2026-03-09T18:29:04.194048+0000","last_became_active":"2026-03-09T18:29:00.808679+0000","last_became_peered":"2026-03-09T18:29:00.808679+0000","last_unstale":"2026-03-09T18:29:04.194048+0000","last_undegraded":"2026-03-09T18:29:04.194048+0000","last_fullsized":"2026-03-09T18:29:04.194048+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:37:33.319437+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,7,5],"acting":[2,7,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.e","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.828652+0000","last_change":"2026-03-09T18:29:02.831238+0000","last_active":"2026-03-09T18:29:03.828652+0000","last_peered":"2026-03-09T18:29:03.828652+0000","last_clean":"2026-03-09T18:29:03.828652+0000","last_became_active":"2026-03-09T18:29:02.830520+0000","last_became_peered":"2026-03-09T18:29:02.830520+0000","last_unstale":"2026-03-09T18:29:03.828652+0000","last_undegraded":"2026-03-09T18:29:03.828652+0000","last_fullsized":"2026-03-09T18:29:03.828652+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.7876
95+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T23:48:23.849047+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"4.b","version":"54'9","reported_seq":37,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.626067+0000","last_change":"2026-03-09T18:28:58.756963+0000","last_active":"2026-03-09T18:29:04.626067+0000","last_peered":"2026-03-09T18:29:04.626067+0000","last_clean":"2026-03-09T18:29:04.626067+0000","last_became_active":"2026-03-09T18:28:58.756834+0000","last_became_peered":"2026-03-09T18:28:58.756834+0000","last
_unstale":"2026-03-09T18:29:04.626067+0000","last_undegraded":"2026-03-09T18:29:04.626067+0000","last_fullsized":"2026-03-09T18:29:04.626067+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T19:48:38.692797+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.c"
,"version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T18:29:07.709080+0000","last_change":"2026-03-09T18:28:56.741274+0000","last_active":"2026-03-09T18:29:07.709080+0000","last_peered":"2026-03-09T18:29:07.709080+0000","last_clean":"2026-03-09T18:29:07.709080+0000","last_became_active":"2026-03-09T18:28:56.741207+0000","last_became_peered":"2026-03-09T18:28:56.741207+0000","last_unstale":"2026-03-09T18:29:07.709080+0000","last_undegraded":"2026-03-09T18:29:07.709080+0000","last_fullsized":"2026-03-09T18:29:07.709080+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:42:21.087080+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,6],"acting":[5,3,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.a","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.798841+0000","last_change":"2026-03-09T18:29:00.817435+0000","last_active":"2026-03-09T18:29:03.798841+0000","last_peered":"2026-03-09T18:29:03.798841+0000","last_clean":"2026-03-09T18:29:03.798841+0000","last_became_active":"2026-03-09T18:29:00.817229+0000","last_became_peered":"2026-03-09T18:29:00.817229+0000","last_unstale":"2026-03-09T18:29:03.798841+0000","last_undegraded":"2026-03-09T18:29:03.798841+0000","last_fullsized":"2026-03-09T18:29:03.798841+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.7301
10+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T02:23:06.790852+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,3],"acting":[2,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.9","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.820010+0000","last_change":"2026-03-09T18:29:02.824916+0000","last_active":"2026-03-09T18:29:03.820010+0000","last_peered":"2026-03-09T18:29:03.820010+0000","last_clean":"2026-03-09T18:29:03.820010+0000","last_became_active":"2026-03-09T18:29:02.824795+0000","last_became_peered":"2026-03-09T18:29:02.824795+0000","last_
unstale":"2026-03-09T18:29:03.820010+0000","last_undegraded":"2026-03-09T18:29:03.820010+0000","last_fullsized":"2026-03-09T18:29:03.820010+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:47:00.539746+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.a","versi
on":"54'19","reported_seq":52,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.685863+0000","last_change":"2026-03-09T18:28:58.959945+0000","last_active":"2026-03-09T18:29:04.685863+0000","last_peered":"2026-03-09T18:29:04.685863+0000","last_clean":"2026-03-09T18:29:04.685863+0000","last_became_active":"2026-03-09T18:28:58.959787+0000","last_became_peered":"2026-03-09T18:28:58.959787+0000","last_unstale":"2026-03-09T18:29:04.685863+0000","last_undegraded":"2026-03-09T18:29:04.685863+0000","last_fullsized":"2026-03-09T18:29:04.685863+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:47:47.176049+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":9,"num_object_clones":0,"num_object_copies":27,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":9,"num_whiteouts":0,"num_read":32,"num_read_kb":21,"num_write":20,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,1,7],"acting":[6,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"3.d","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.799130+0000","last_change":"2026-03-09T18:28:56.742054+0000","last_active":"2026-03-09T18:29:03.799130+0000","last_peered":"2026-03-09T18:29:03.799130+0000","last_clean":"2026-03-09T18:29:03.799130+0000","last_became_active":"2026-03-09T18:28:56.741959+0000","last_became_peered":"2026-03-09T18:28:56.741959+0000","last_unstale":"2026-03-09T18:29:03.799130+0000","last_undegraded":"2026-03-09T18:29:03.799130+0000","last_fullsized":"2026-03-09T18:29:03.799130+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:5
5.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T02:09:32.302336+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,6],"acting":[7,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.b","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.798887+0000","last_change":"2026-03-09T18:29:00.806973+0000","last_active":"2026-03-09T18:29:03.798887+0000","last_peered":"2026-03-09T18:29:03.798887+0000","last_clean":"2026-03-09T18:29:03.798887+0000","last_became_active":"2026-03-09T18:29:00.806776+0000","last_became_peered":"2026-03-09T18:29:00.806776+0000",
"last_unstale":"2026-03-09T18:29:03.798887+0000","last_undegraded":"2026-03-09T18:29:03.798887+0000","last_fullsized":"2026-03-09T18:29:03.798887+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:55:09.973182+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,5],"acting":[2,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.8",
"version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.799115+0000","last_change":"2026-03-09T18:29:02.838553+0000","last_active":"2026-03-09T18:29:03.799115+0000","last_peered":"2026-03-09T18:29:03.799115+0000","last_clean":"2026-03-09T18:29:03.799115+0000","last_became_active":"2026-03-09T18:29:02.838400+0000","last_became_peered":"2026-03-09T18:29:02.838400+0000","last_unstale":"2026-03-09T18:29:03.799115+0000","last_undegraded":"2026-03-09T18:29:03.799115+0000","last_fullsized":"2026-03-09T18:29:03.799115+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:00:37.254278+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,3],"acting":[7,2,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"4.9","version":"54'12","reported_seq":44,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.389370+0000","last_change":"2026-03-09T18:28:58.768598+0000","last_active":"2026-03-09T18:29:04.389370+0000","last_peered":"2026-03-09T18:29:04.389370+0000","last_clean":"2026-03-09T18:29:04.389370+0000","last_became_active":"2026-03-09T18:28:58.767418+0000","last_became_peered":"2026-03-09T18:28:58.767418+0000","last_unstale":"2026-03-09T18:29:04.389370+0000","last_undegraded":"2026-03-09T18:29:04.389370+0000","last_fullsized":"2026-03-09T18:29:04.389370+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.71
9732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:17:16.961128+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,3],"acting":[4,1,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.e","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.799083+0000","last_change":"2026-03-09T18:28:56.745496+0000","last_active":"2026-03-09T18:29:03.799083+0000","last_peered":"2026-03-09T18:29:03.799083+0000","last_clean":"2026-03-09T18:29:03.799083+0000","last_became_active":"2026-03-09T18:28:56.744938+0000","last_became_peered":"2026-03-09T18:28:56.744938+00
00","last_unstale":"2026-03-09T18:29:03.799083+0000","last_undegraded":"2026-03-09T18:29:03.799083+0000","last_fullsized":"2026-03-09T18:29:03.799083+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:06:38.340482+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5
.8","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.798797+0000","last_change":"2026-03-09T18:29:00.806668+0000","last_active":"2026-03-09T18:29:03.798797+0000","last_peered":"2026-03-09T18:29:03.798797+0000","last_clean":"2026-03-09T18:29:03.798797+0000","last_became_active":"2026-03-09T18:29:00.806474+0000","last_became_peered":"2026-03-09T18:29:00.806474+0000","last_unstale":"2026-03-09T18:29:03.798797+0000","last_undegraded":"2026-03-09T18:29:03.798797+0000","last_fullsized":"2026-03-09T18:29:03.798797+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T19:09:22.649859+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,1],"acting":[2,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.b","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.816037+0000","last_change":"2026-03-09T18:29:02.835169+0000","last_active":"2026-03-09T18:29:03.816037+0000","last_peered":"2026-03-09T18:29:03.816037+0000","last_clean":"2026-03-09T18:29:03.816037+0000","last_became_active":"2026-03-09T18:29:02.833612+0000","last_became_peered":"2026-03-09T18:29:02.833612+0000","last_unstale":"2026-03-09T18:29:03.816037+0000","last_undegraded":"2026-03-09T18:29:03.816037+0000","last_fullsized":"2026-03-09T18:29:03.816037+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.7876
95+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:27:17.473621+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,1],"acting":[3,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.8","version":"54'15","reported_seq":48,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T18:29:07.708709+0000","last_change":"2026-03-09T18:28:58.956362+0000","last_active":"2026-03-09T18:29:07.708709+0000","last_peered":"2026-03-09T18:29:07.708709+0000","last_clean":"2026-03-09T18:29:07.708709+0000","last_became_active":"2026-03-09T18:28:58.956174+0000","last_became_peered":"2026-03-09T18:28:58.956174+0000","las
t_unstale":"2026-03-09T18:29:07.708709+0000","last_undegraded":"2026-03-09T18:29:07.708709+0000","last_fullsized":"2026-03-09T18:29:07.708709+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T03:39:48.856610+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,7,6],"acting":[5,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3
.f","version":"47'2","reported_seq":37,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.031059+0000","last_change":"2026-03-09T18:28:56.746496+0000","last_active":"2026-03-09T18:29:04.031059+0000","last_peered":"2026-03-09T18:29:04.031059+0000","last_clean":"2026-03-09T18:29:04.031059+0000","last_became_active":"2026-03-09T18:28:56.746341+0000","last_became_peered":"2026-03-09T18:28:56.746341+0000","last_unstale":"2026-03-09T18:29:04.031059+0000","last_undegraded":"2026-03-09T18:29:04.031059+0000","last_fullsized":"2026-03-09T18:29:04.031059+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:26:16.071845+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":92,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":10,"num_read_kb":10,"num_write":4,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,0],"acting":[7,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.9","version":"54'8","reported_seq":28,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.261483+0000","last_change":"2026-03-09T18:29:00.826374+0000","last_active":"2026-03-09T18:29:04.261483+0000","last_peered":"2026-03-09T18:29:04.261483+0000","last_clean":"2026-03-09T18:29:04.261483+0000","last_became_active":"2026-03-09T18:29:00.826226+0000","last_became_peered":"2026-03-09T18:29:00.826226+0000","last_unstale":"2026-03-09T18:29:04.261483+0000","last_undegraded":"2026-03-09T18:29:04.261483+0000","last_fullsized":"2026-03-09T18:29:04.261483+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.
730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:34:57.862238+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.a","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T18:29:07.708681+0000","last_change":"2026-03-09T18:29:02.809593+0000","last_active":"2026-03-09T18:29:07.708681+0000","last_peered":"2026-03-09T18:29:07.708681+0000","last_clean":"2026-03-09T18:29:07.708681+0000","last_became_active":"2026-03-09T18:29:02.809359+0000","last_became_peered":"2026-03-09T18:29:02.809359+0000","l
ast_unstale":"2026-03-09T18:29:07.708681+0000","last_undegraded":"2026-03-09T18:29:07.708681+0000","last_fullsized":"2026-03-09T18:29:07.708681+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:54:58.267020+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,6,0],"acting":[5,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.10","
version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.839162+0000","last_change":"2026-03-09T18:28:56.747934+0000","last_active":"2026-03-09T18:29:03.839162+0000","last_peered":"2026-03-09T18:29:03.839162+0000","last_clean":"2026-03-09T18:29:03.839162+0000","last_became_active":"2026-03-09T18:28:56.747628+0000","last_became_peered":"2026-03-09T18:28:56.747628+0000","last_unstale":"2026-03-09T18:29:03.839162+0000","last_undegraded":"2026-03-09T18:29:03.839162+0000","last_fullsized":"2026-03-09T18:29:03.839162+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:00:12.102208+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"4.17","version":"54'6","reported_seq":30,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.604890+0000","last_change":"2026-03-09T18:28:58.958060+0000","last_active":"2026-03-09T18:29:04.604890+0000","last_peered":"2026-03-09T18:29:04.604890+0000","last_clean":"2026-03-09T18:29:04.604890+0000","last_became_active":"2026-03-09T18:28:58.957623+0000","last_became_peered":"2026-03-09T18:28:58.957623+0000","last_unstale":"2026-03-09T18:29:04.604890+0000","last_undegraded":"2026-03-09T18:29:04.604890+0000","last_fullsized":"2026-03-09T18:29:04.604890+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.71
9732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":6,"log_dups_size":0,"ondisk_log_size":6,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:44:42.386720+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":9,"num_read_kb":6,"num_write":6,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,1],"acting":[3,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.16","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T18:29:07.709224+0000","last_change":"2026-03-09T18:29:00.827277+0000","last_active":"2026-03-09T18:29:07.709224+0000","last_peered":"2026-03-09T18:29:07.709224+0000","last_clean":"2026-03-09T18:29:07.709224+0000","last_became_active":"2026-03-09T18:29:00.816344+0000","last_became_peered":"2026-03-09T18:29:00.816344+0000","la
st_unstale":"2026-03-09T18:29:07.709224+0000","last_undegraded":"2026-03-09T18:29:07.709224+0000","last_fullsized":"2026-03-09T18:29:07.709224+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T05:07:01.246586+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,1],"acting":[5,3,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.15","v
ersion":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.799473+0000","last_change":"2026-03-09T18:29:02.825834+0000","last_active":"2026-03-09T18:29:03.799473+0000","last_peered":"2026-03-09T18:29:03.799473+0000","last_clean":"2026-03-09T18:29:03.799473+0000","last_became_active":"2026-03-09T18:29:02.825125+0000","last_became_peered":"2026-03-09T18:29:02.825125+0000","last_unstale":"2026-03-09T18:29:03.799473+0000","last_undegraded":"2026-03-09T18:29:03.799473+0000","last_fullsized":"2026-03-09T18:29:03.799473+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:57:52.986881+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"4.16","version":"54'9","reported_seq":37,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.413864+0000","last_change":"2026-03-09T18:28:58.957319+0000","last_active":"2026-03-09T18:29:04.413864+0000","last_peered":"2026-03-09T18:29:04.413864+0000","last_clean":"2026-03-09T18:29:04.413864+0000","last_became_active":"2026-03-09T18:28:58.957081+0000","last_became_peered":"2026-03-09T18:28:58.957081+0000","last_unstale":"2026-03-09T18:29:04.413864+0000","last_undegraded":"2026-03-09T18:29:04.413864+0000","last_fullsized":"2026-03-09T18:29:04.413864+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.71
9732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T18:46:24.182293+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,3,7],"acting":[0,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.11","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.798965+0000","last_change":"2026-03-09T18:28:56.745571+0000","last_active":"2026-03-09T18:29:03.798965+0000","last_peered":"2026-03-09T18:29:03.798965+0000","last_clean":"2026-03-09T18:29:03.798965+0000","last_became_active":"2026-03-09T18:28:56.745107+0000","last_became_peered":"2026-03-09T18:28:56.745107+000
0","last_unstale":"2026-03-09T18:29:03.798965+0000","last_undegraded":"2026-03-09T18:29:03.798965+0000","last_fullsized":"2026-03-09T18:29:03.798965+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T23:50:16.216033+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.
17","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.815956+0000","last_change":"2026-03-09T18:29:00.822208+0000","last_active":"2026-03-09T18:29:03.815956+0000","last_peered":"2026-03-09T18:29:03.815956+0000","last_clean":"2026-03-09T18:29:03.815956+0000","last_became_active":"2026-03-09T18:29:00.822072+0000","last_became_peered":"2026-03-09T18:29:00.822072+0000","last_unstale":"2026-03-09T18:29:03.815956+0000","last_undegraded":"2026-03-09T18:29:03.815956+0000","last_fullsized":"2026-03-09T18:29:03.815956+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:17:32.895635+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.14","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.798390+0000","last_change":"2026-03-09T18:29:02.823224+0000","last_active":"2026-03-09T18:29:03.798390+0000","last_peered":"2026-03-09T18:29:03.798390+0000","last_clean":"2026-03-09T18:29:03.798390+0000","last_became_active":"2026-03-09T18:29:02.823141+0000","last_became_peered":"2026-03-09T18:29:02.823141+0000","last_unstale":"2026-03-09T18:29:03.798390+0000","last_undegraded":"2026-03-09T18:29:03.798390+0000","last_fullsized":"2026-03-09T18:29:03.798390+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787
695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:08:30.376630+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,7],"acting":[2,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"4.15","version":"54'9","reported_seq":39,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T18:29:07.709357+0000","last_change":"2026-03-09T18:28:58.956321+0000","last_active":"2026-03-09T18:29:07.709357+0000","last_peered":"2026-03-09T18:29:07.709357+0000","last_clean":"2026-03-09T18:29:07.709357+0000","last_became_active":"2026-03-09T18:28:58.956124+0000","last_became_peered":"2026-03-09T18:28:58.956124+0000","la
st_unstale":"2026-03-09T18:29:07.709357+0000","last_undegraded":"2026-03-09T18:29:07.709357+0000","last_fullsized":"2026-03-09T18:29:07.709357+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:40:57.133733+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,7,3],"acting":[5,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.
12","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.819562+0000","last_change":"2026-03-09T18:28:56.743533+0000","last_active":"2026-03-09T18:29:03.819562+0000","last_peered":"2026-03-09T18:29:03.819562+0000","last_clean":"2026-03-09T18:29:03.819562+0000","last_became_active":"2026-03-09T18:28:56.743367+0000","last_became_peered":"2026-03-09T18:28:56.743367+0000","last_unstale":"2026-03-09T18:29:03.819562+0000","last_undegraded":"2026-03-09T18:29:03.819562+0000","last_fullsized":"2026-03-09T18:29:03.819562+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:57:42.021880+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.14","version":"54'8","reported_seq":28,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.259553+0000","last_change":"2026-03-09T18:29:00.822143+0000","last_active":"2026-03-09T18:29:04.259553+0000","last_peered":"2026-03-09T18:29:04.259553+0000","last_clean":"2026-03-09T18:29:04.259553+0000","last_became_active":"2026-03-09T18:29:00.821941+0000","last_became_peered":"2026-03-09T18:29:00.821941+0000","last_unstale":"2026-03-09T18:29:04.259553+0000","last_undegraded":"2026-03-09T18:29:04.259553+0000","last_fullsized":"2026-03-09T18:29:04.259553+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.73
0110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T19:26:45.771903+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,2],"acting":[3,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.17","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.828334+0000","last_change":"2026-03-09T18:29:02.830170+0000","last_active":"2026-03-09T18:29:03.828334+0000","last_peered":"2026-03-09T18:29:03.828334+0000","last_clean":"2026-03-09T18:29:03.828334+0000","last_became_active":"2026-03-09T18:29:02.830002+0000","last_became_peered":"2026-03-09T18:29:02.830002+0000","la
st_unstale":"2026-03-09T18:29:03.828334+0000","last_undegraded":"2026-03-09T18:29:03.828334+0000","last_fullsized":"2026-03-09T18:29:03.828334+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T05:29:44.228857+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,5],"acting":[4,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"4.14","v
ersion":"54'10","reported_seq":36,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.637164+0000","last_change":"2026-03-09T18:28:58.957812+0000","last_active":"2026-03-09T18:29:04.637164+0000","last_peered":"2026-03-09T18:29:04.637164+0000","last_clean":"2026-03-09T18:29:04.637164+0000","last_became_active":"2026-03-09T18:28:58.957000+0000","last_became_peered":"2026-03-09T18:28:58.957000+0000","last_unstale":"2026-03-09T18:29:04.637164+0000","last_undegraded":"2026-03-09T18:29:04.637164+0000","last_fullsized":"2026-03-09T18:29:04.637164+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T06:10:07.235258+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.13","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.798859+0000","last_change":"2026-03-09T18:28:56.745661+0000","last_active":"2026-03-09T18:29:03.798859+0000","last_peered":"2026-03-09T18:29:03.798859+0000","last_clean":"2026-03-09T18:29:03.798859+0000","last_became_active":"2026-03-09T18:28:56.745221+0000","last_became_peered":"2026-03-09T18:28:56.745221+0000","last_unstale":"2026-03-09T18:29:03.798859+0000","last_undegraded":"2026-03-09T18:29:03.798859+0000","last_fullsized":"2026-03-09T18:29:03.798859+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55
.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:34:43.697531+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,2],"acting":[7,4,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.15","version":"54'8","reported_seq":30,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T18:29:07.709538+0000","last_change":"2026-03-09T18:29:00.808285+0000","last_active":"2026-03-09T18:29:07.709538+0000","last_peered":"2026-03-09T18:29:07.709538+0000","last_clean":"2026-03-09T18:29:07.709538+0000","last_became_active":"2026-03-09T18:29:00.808157+0000","last_became_peered":"2026-03-09T18:29:00.808157+0000"
,"last_unstale":"2026-03-09T18:29:07.709538+0000","last_undegraded":"2026-03-09T18:29:07.709538+0000","last_fullsized":"2026-03-09T18:29:07.709538+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:35:41.854119+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.16
","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.819115+0000","last_change":"2026-03-09T18:29:02.822590+0000","last_active":"2026-03-09T18:29:03.819115+0000","last_peered":"2026-03-09T18:29:03.819115+0000","last_clean":"2026-03-09T18:29:03.819115+0000","last_became_active":"2026-03-09T18:29:02.822402+0000","last_became_peered":"2026-03-09T18:29:02.822402+0000","last_unstale":"2026-03-09T18:29:03.819115+0000","last_undegraded":"2026-03-09T18:29:03.819115+0000","last_fullsized":"2026-03-09T18:29:03.819115+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T05:05:08.032760+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.13","version":"54'11","reported_seq":40,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.618810+0000","last_change":"2026-03-09T18:28:58.778550+0000","last_active":"2026-03-09T18:29:04.618810+0000","last_peered":"2026-03-09T18:29:04.618810+0000","last_clean":"2026-03-09T18:29:04.618810+0000","last_became_active":"2026-03-09T18:28:58.777795+0000","last_became_peered":"2026-03-09T18:28:58.777795+0000","last_unstale":"2026-03-09T18:29:04.618810+0000","last_undegraded":"2026-03-09T18:29:04.618810+0000","last_fullsized":"2026-03-09T18:29:04.618810+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.7
19732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T06:17:45.029074+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.14","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.828862+0000","last_change":"2026-03-09T18:28:56.743372+0000","last_active":"2026-03-09T18:29:03.828862+0000","last_peered":"2026-03-09T18:29:03.828862+0000","last_clean":"2026-03-09T18:29:03.828862+0000","last_became_active":"2026-03-09T18:28:56.743215+0000","last_became_peered":"2026-03-09T18:28:56.743215+
0000","last_unstale":"2026-03-09T18:29:03.828862+0000","last_undegraded":"2026-03-09T18:29:03.828862+0000","last_fullsized":"2026-03-09T18:29:03.828862+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T05:48:34.454253+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,7,6],"acting":[4,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":
"5.12","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.825841+0000","last_change":"2026-03-09T18:29:00.809493+0000","last_active":"2026-03-09T18:29:03.825841+0000","last_peered":"2026-03-09T18:29:03.825841+0000","last_clean":"2026-03-09T18:29:03.825841+0000","last_became_active":"2026-03-09T18:29:00.808643+0000","last_became_peered":"2026-03-09T18:29:00.808643+0000","last_unstale":"2026-03-09T18:29:03.825841+0000","last_undegraded":"2026-03-09T18:29:03.825841+0000","last_fullsized":"2026-03-09T18:29:03.825841+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T05:51:27.723354+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.11","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.815871+0000","last_change":"2026-03-09T18:29:02.835220+0000","last_active":"2026-03-09T18:29:03.815871+0000","last_peered":"2026-03-09T18:29:03.815871+0000","last_clean":"2026-03-09T18:29:03.815871+0000","last_became_active":"2026-03-09T18:29:02.833685+0000","last_became_peered":"2026-03-09T18:29:02.833685+0000","last_unstale":"2026-03-09T18:29:03.815871+0000","last_undegraded":"2026-03-09T18:29:03.815871+0000","last_fullsized":"2026-03-09T18:29:03.815871+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787
695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:27:57.103570+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,5],"acting":[3,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.12","version":"54'9","reported_seq":37,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.512951+0000","last_change":"2026-03-09T18:28:58.752216+0000","last_active":"2026-03-09T18:29:04.512951+0000","last_peered":"2026-03-09T18:29:04.512951+0000","last_clean":"2026-03-09T18:29:04.512951+0000","last_became_active":"2026-03-09T18:28:58.751968+0000","last_became_peered":"2026-03-09T18:28:58.751968+0000","la
st_unstale":"2026-03-09T18:29:04.512951+0000","last_undegraded":"2026-03-09T18:29:04.512951+0000","last_fullsized":"2026-03-09T18:29:04.512951+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T05:52:04.891923+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,2],"acting":[1,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.
15","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.798823+0000","last_change":"2026-03-09T18:28:56.745753+0000","last_active":"2026-03-09T18:29:03.798823+0000","last_peered":"2026-03-09T18:29:03.798823+0000","last_clean":"2026-03-09T18:29:03.798823+0000","last_became_active":"2026-03-09T18:28:56.745336+0000","last_became_peered":"2026-03-09T18:28:56.745336+0000","last_unstale":"2026-03-09T18:29:03.798823+0000","last_undegraded":"2026-03-09T18:29:03.798823+0000","last_fullsized":"2026-03-09T18:29:03.798823+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:11:46.879409+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,3,4],"acting":[7,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.13","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.816703+0000","last_change":"2026-03-09T18:29:00.813351+0000","last_active":"2026-03-09T18:29:03.816703+0000","last_peered":"2026-03-09T18:29:03.816703+0000","last_clean":"2026-03-09T18:29:03.816703+0000","last_became_active":"2026-03-09T18:29:00.813251+0000","last_became_peered":"2026-03-09T18:29:00.813251+0000","last_unstale":"2026-03-09T18:29:03.816703+0000","last_undegraded":"2026-03-09T18:29:03.816703+0000","last_fullsized":"2026-03-09T18:29:03.816703+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730
110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T03:10:04.604899+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.10","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.819013+0000","last_change":"2026-03-09T18:29:02.809194+0000","last_active":"2026-03-09T18:29:03.819013+0000","last_peered":"2026-03-09T18:29:03.819013+0000","last_clean":"2026-03-09T18:29:03.819013+0000","last_became_active":"2026-03-09T18:29:02.809055+0000","last_became_peered":"2026-03-09T18:29:02.809055+0000","las
t_unstale":"2026-03-09T18:29:03.819013+0000","last_undegraded":"2026-03-09T18:29:03.819013+0000","last_fullsized":"2026-03-09T18:29:03.819013+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T02:19:06.415313+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,1],"acting":[0,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.11","ve
rsion":"54'11","reported_seq":40,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.669722+0000","last_change":"2026-03-09T18:28:58.957983+0000","last_active":"2026-03-09T18:29:04.669722+0000","last_peered":"2026-03-09T18:29:04.669722+0000","last_clean":"2026-03-09T18:29:04.669722+0000","last_became_active":"2026-03-09T18:28:58.957447+0000","last_became_peered":"2026-03-09T18:28:58.957447+0000","last_unstale":"2026-03-09T18:29:04.669722+0000","last_undegraded":"2026-03-09T18:29:04.669722+0000","last_fullsized":"2026-03-09T18:29:04.669722+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:01:23.474882+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.16","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T18:29:07.709004+0000","last_change":"2026-03-09T18:28:56.739190+0000","last_active":"2026-03-09T18:29:07.709004+0000","last_peered":"2026-03-09T18:29:07.709004+0000","last_clean":"2026-03-09T18:29:07.709004+0000","last_became_active":"2026-03-09T18:28:56.739067+0000","last_became_peered":"2026-03-09T18:28:56.739067+0000","last_unstale":"2026-03-09T18:29:07.709004+0000","last_undegraded":"2026-03-09T18:29:07.709004+0000","last_fullsized":"2026-03-09T18:29:07.709004+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:
55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:06:20.038676+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,7,1],"acting":[5,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.10","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.799354+0000","last_change":"2026-03-09T18:29:00.826314+0000","last_active":"2026-03-09T18:29:03.799354+0000","last_peered":"2026-03-09T18:29:03.799354+0000","last_clean":"2026-03-09T18:29:03.799354+0000","last_became_active":"2026-03-09T18:29:00.826115+0000","last_became_peered":"2026-03-09T18:29:00.826115+0000
","last_unstale":"2026-03-09T18:29:03.799354+0000","last_undegraded":"2026-03-09T18:29:03.799354+0000","last_fullsized":"2026-03-09T18:29:03.799354+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T06:00:16.431852+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.1
3","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.816922+0000","last_change":"2026-03-09T18:29:02.830265+0000","last_active":"2026-03-09T18:29:03.816922+0000","last_peered":"2026-03-09T18:29:03.816922+0000","last_clean":"2026-03-09T18:29:03.816922+0000","last_became_active":"2026-03-09T18:29:02.830123+0000","last_became_peered":"2026-03-09T18:29:02.830123+0000","last_unstale":"2026-03-09T18:29:03.816922+0000","last_undegraded":"2026-03-09T18:29:03.816922+0000","last_fullsized":"2026-03-09T18:29:03.816922+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:10:01.879835+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,6],"acting":[3,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.10","version":"54'4","reported_seq":27,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.394739+0000","last_change":"2026-03-09T18:28:58.762390+0000","last_active":"2026-03-09T18:29:04.394739+0000","last_peered":"2026-03-09T18:29:04.394739+0000","last_clean":"2026-03-09T18:29:04.394739+0000","last_became_active":"2026-03-09T18:28:58.762277+0000","last_became_peered":"2026-03-09T18:28:58.762277+0000","last_unstale":"2026-03-09T18:29:04.394739+0000","last_undegraded":"2026-03-09T18:29:04.394739+0000","last_fullsized":"2026-03-09T18:29:04.394739+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.71
9732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":4,"log_dups_size":0,"ondisk_log_size":4,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T05:42:54.836306+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":6,"num_read_kb":4,"num_write":4,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,6],"acting":[3,1,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.17","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.819206+0000","last_change":"2026-03-09T18:28:56.742203+0000","last_active":"2026-03-09T18:29:03.819206+0000","last_peered":"2026-03-09T18:29:03.819206+0000","last_clean":"2026-03-09T18:29:03.819206+0000","last_became_active":"2026-03-09T18:28:56.742090+0000","last_became_peered":"2026-03-09T18:28:56.742090+0000","la
st_unstale":"2026-03-09T18:29:03.819206+0000","last_undegraded":"2026-03-09T18:29:03.819206+0000","last_fullsized":"2026-03-09T18:29:03.819206+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T03:11:06.825932+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,3],"acting":[0,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.11","v
ersion":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.839463+0000","last_change":"2026-03-09T18:29:00.817278+0000","last_active":"2026-03-09T18:29:03.839463+0000","last_peered":"2026-03-09T18:29:03.839463+0000","last_clean":"2026-03-09T18:29:03.839463+0000","last_became_active":"2026-03-09T18:29:00.817119+0000","last_became_peered":"2026-03-09T18:29:00.817119+0000","last_unstale":"2026-03-09T18:29:03.839463+0000","last_undegraded":"2026-03-09T18:29:03.839463+0000","last_fullsized":"2026-03-09T18:29:03.839463+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:25:36.253006+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.12","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.798399+0000","last_change":"2026-03-09T18:29:02.824849+0000","last_active":"2026-03-09T18:29:03.798399+0000","last_peered":"2026-03-09T18:29:03.798399+0000","last_clean":"2026-03-09T18:29:03.798399+0000","last_became_active":"2026-03-09T18:29:02.824578+0000","last_became_peered":"2026-03-09T18:29:02.824578+0000","last_unstale":"2026-03-09T18:29:03.798399+0000","last_undegraded":"2026-03-09T18:29:03.798399+0000","last_fullsized":"2026-03-09T18:29:03.798399+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787
695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:47:42.094995+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,4],"acting":[7,2,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.1d","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.826613+0000","last_change":"2026-03-09T18:29:02.825236+0000","last_active":"2026-03-09T18:29:03.826613+0000","last_peered":"2026-03-09T18:29:03.826613+0000","last_clean":"2026-03-09T18:29:03.826613+0000","last_became_active":"2026-03-09T18:29:02.824740+0000","last_became_peered":"2026-03-09T18:29:02.824740+0000","las
t_unstale":"2026-03-09T18:29:03.826613+0000","last_undegraded":"2026-03-09T18:29:03.826613+0000","last_fullsized":"2026-03-09T18:29:03.826613+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:09:19.985063+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.18","ve
rsion":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.816471+0000","last_change":"2026-03-09T18:28:56.739495+0000","last_active":"2026-03-09T18:29:03.816471+0000","last_peered":"2026-03-09T18:29:03.816471+0000","last_clean":"2026-03-09T18:29:03.816471+0000","last_became_active":"2026-03-09T18:28:56.739397+0000","last_became_peered":"2026-03-09T18:28:56.739397+0000","last_unstale":"2026-03-09T18:29:03.816471+0000","last_undegraded":"2026-03-09T18:29:03.816471+0000","last_fullsized":"2026-03-09T18:29:03.816471+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T19:22:08.981940+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.1f","version":"54'11","reported_seq":40,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.672027+0000","last_change":"2026-03-09T18:28:58.759750+0000","last_active":"2026-03-09T18:29:04.672027+0000","last_peered":"2026-03-09T18:29:04.672027+0000","last_clean":"2026-03-09T18:29:04.672027+0000","last_became_active":"2026-03-09T18:28:58.759643+0000","last_became_peered":"2026-03-09T18:28:58.759643+0000","last_unstale":"2026-03-09T18:29:04.672027+0000","last_undegraded":"2026-03-09T18:29:04.672027+0000","last_fullsized":"2026-03-09T18:29:04.672027+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.7
19732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:31:10.636369+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,5,1],"acting":[6,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.1e","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.819691+0000","last_change":"2026-03-09T18:29:00.820327+0000","last_active":"2026-03-09T18:29:03.819691+0000","last_peered":"2026-03-09T18:29:03.819691+0000","last_clean":"2026-03-09T18:29:03.819691+0000","last_became_active":"2026-03-09T18:29:00.820189+0000","last_became_peered":"2026-03-09T18:29:00.820189+
0000","last_unstale":"2026-03-09T18:29:03.819691+0000","last_undegraded":"2026-03-09T18:29:03.819691+0000","last_fullsized":"2026-03-09T18:29:03.819691+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T03:15:24.166473+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]}],"pool_s
tats":[{"poolid":6,"num_pg":32,"stat_sum":{"num_bytes":416,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":3,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1248,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":2,"ondisk_log_size":2,"up":96,"acting":96,"num_store_stats":8},{"poolid":5,"num_pg":32,"stat_sum":{"num_bytes":0,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_
snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":64,"ondisk_log_size":64,"up":96,"acting":96,"num_store_stats":8},{"poolid":4,"num_pg":32,"stat_sum":{"num_bytes":3702,"num_objects":178,"num_object_clones":0,"num_object_copies":534,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":178,"num_whiteouts":0,"num_read":698,"num_read_kb":455,"num_write":417,"num_write_kb":34,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":417792,"data_stored":11106,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":393,"ondisk_log_size":393,"up":96,"acting":96,"num_store_stats":8},{"poolid":3,"num_pg":32,"stat_sum":{"num_bytes":1613,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":24,"num_read_kb":24,"num_write":10,"num_write_kb":6,"num
_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":73728,"data_stored":4839,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":6,"ondisk_log_size":6,"up":96,"acting":96,"num_store_stats":8},{"poolid":2,"num_pg":3,"stat_sum":{"num_bytes":408,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":8,"num_read_kb":3,"num_write":6,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1224,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size
":8,"ondisk_log_size":8,"up":9,"acting":9,"num_store_stats":7},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":2314240,"data_stored":2296400,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":7}],"osd_stats":[{"osd":7,"up_from":43,"seq":184683593733,"num_pgs":53,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27880,"kb_used_data":1048,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939544,"statfs":{"total":21470642176,"available":21442093056,"internally_reserved":0,"allocated":1073152,"data_stored":705250,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1584,"internal_metadata":27458000},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns
":0,"apply_latency_ns":0},"alerts":[]},{"osd":6,"up_from":38,"seq":163208757255,"num_pgs":43,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27860,"kb_used_data":1024,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939564,"statfs":{"total":21470642176,"available":21442113536,"internally_reserved":0,"allocated":1048576,"data_stored":704152,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[0,0,0,0,0,2],"upper_bound":64},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":5,"up_from":33,"seq":141733920778,"num_pgs":47,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27424,"kb_used_data":588,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940000,"statfs":{"total":21470642176,"available":21442560000,"internally_reserved":0,"allocated":602112,"data_stored":252587,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":1,"apply_latency_ms":1,"commit_latency_ns":1000000,"apply_latency_ns":1000000},"alerts":[]},{"osd":4,"up_from":27,"seq":115964117004,"num_pgs":58,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27448,"kb_used_data":612,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939976,"statfs":{"total":21470642176,"available":21442535424,"internally_reserved":0,"allocated":626688,"data_stored":246919,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":274
57994},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":3,"up_from":23,"seq":98784247821,"num_pgs":56,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27460,"kb_used_data":620,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939964,"statfs":{"total":21470642176,"available":21442523136,"internally_reserved":0,"allocated":634880,"data_stored":247205,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1588,"internal_metadata":27457996},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":16,"seq":68719476751,"num_pgs":36,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27400,"kb_used_data":568,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940024,"statfs":{"total":21470642176,"available":21442584576,"internally_reserved":0,"allocated":581632,"data_stored":245020,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":12,"seq":51539607569,"num_pgs":57,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27472,"kb_used_data":636,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939952,"statfs":{"total":21470642176,"available
":21442510848,"internally_reserved":0,"allocated":651264,"data_stored":246657,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":5,"apply_latency_ms":5,"commit_latency_ns":5000000,"apply_latency_ns":5000000},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738387,"num_pgs":42,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27892,"kb_used_data":1056,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939532,"statfs":{"total":21470642176,"available":21442080768,"internally_reserved":0,"allocated":1081344,"data_stored":706375,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":1,"apply_latency_ms":1,"commit_latency_ns":1000000,"apply_latency_ns":1000000},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocat
ed":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":408,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":6,"total":0,"available":0,"internally_reser
ved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":20480,"data_stored":1567,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":92,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":12288,"data_stored":1475,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1613,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":7,"total":0,"available":0,"internally_
reserved":0,"allocated":8192,"data_stored":92,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":49152,"data_stored":1320,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":90112,"data_stored":2338,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":32768,"data_stored":798,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":73728,"data_stored":1898,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":53248,"data_stored":1474,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":36864,"data_stored":990,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":36864,"data_stored":1034,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":45056,"data_stored":1254,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":0,"total"
:0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":1,"total":0,"available":0,"internally_
reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":403,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":13,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":13,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":403,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":416,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-09T18:29:09.545 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph pg dump --format=json 2026-03-09T18:29:09.772 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config 2026-03-09T18:29:10.014 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:29:10.017 
INFO:teuthology.orchestra.run.vm04.stderr:dumped all 2026-03-09T18:29:10.063 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:09 vm04 ceph-mon[51427]: pgmap v110: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 77 KiB/s rd, 6.2 KiB/s wr, 188 op/s 2026-03-09T18:29:10.063 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:09 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/1829599515' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-09T18:29:10.063 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:09 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/2656405828' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T18:29:10.063 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:09 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/413118903' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T18:29:10.063 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:09 vm04 ceph-mon[57581]: pgmap v110: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 77 KiB/s rd, 6.2 KiB/s wr, 188 op/s 2026-03-09T18:29:10.063 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:09 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/1829599515' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-09T18:29:10.063 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:09 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/2656405828' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T18:29:10.063 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:09 vm04 ceph-mon[57581]: from='client.? 
192.168.123.104:0/413118903' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T18:29:10.064 INFO:teuthology.orchestra.run.vm04.stdout:{"pg_ready":true,"pg_map":{"version":110,"stamp":"2026-03-09T18:29:08.709144+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":465419,"num_objects":199,"num_object_clones":0,"num_object_copies":597,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":199,"num_whiteouts":0,"num_read":776,"num_read_kb":519,"num_write":493,"num_write_kb":629,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":505,"ondisk_log_size":505,"up":396,"acting":396,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":392,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":8,"kb":167739392,"kb_used":220836,"kb_used_data":6152,"kb_used_omap":12,"kb_used_meta":214515,"kb_avail":167518556,"statfs":{"total":171765137408,"available":171539001344,"internally_reserved":0,"allocated":6299648,"data_stored":3354165,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":12711,"internal_metadata":219663961},"h
b_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[0,0,0,0,0,2],"upper_bound":64},"perf_stat":{"commit_latency_ms":7,"apply_latency_ms":7,"commit_latency_ns":7000000,"apply_latency_ns":7000000},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":4364,"num_objects":182,"num_object_clones":0,"num_object_copies":546,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":182,"num_whiteouts":0,"num_read":709,"num_read_kb":465,"num_write":424,"num_write_kb":37,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"6.001338"},"pg_stats":[{"pgid":"3.1f","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.819438+0000","last_change":"2026-03-09T18:28:56.743478+0000","last_active":"2026-03-09T18:29:03.819438+0000","last_peered":"2026-03-09T18:29:03.819438+0000","last_clean":"2026-03-09T18:29:03.819438+0000","last_became_active":"2026-03-09T18:28:56.743256+0000","last_became_peered":"2026-03-09T18:28:56.743256+0000","last_unstale":"2026
-03-09T18:29:03.819438+0000","last_undegraded":"2026-03-09T18:29:03.819438+0000","last_fullsized":"2026-03-09T18:29:03.819438+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:56:49.823169+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,2],"acting":[0,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.18","version":"54'9","r
eported_seq":37,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.314304+0000","last_change":"2026-03-09T18:28:58.778397+0000","last_active":"2026-03-09T18:29:04.314304+0000","last_peered":"2026-03-09T18:29:04.314304+0000","last_clean":"2026-03-09T18:29:04.314304+0000","last_became_active":"2026-03-09T18:28:58.778104+0000","last_became_peered":"2026-03-09T18:28:58.778104+0000","last_unstale":"2026-03-09T18:29:04.314304+0000","last_undegraded":"2026-03-09T18:29:04.314304+0000","last_fullsized":"2026-03-09T18:29:04.314304+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:50:15.933694+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.19","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.825961+0000","last_change":"2026-03-09T18:29:00.809312+0000","last_active":"2026-03-09T18:29:03.825961+0000","last_peered":"2026-03-09T18:29:03.825961+0000","last_clean":"2026-03-09T18:29:03.825961+0000","last_became_active":"2026-03-09T18:29:00.809035+0000","last_became_peered":"2026-03-09T18:29:00.809035+0000","last_unstale":"2026-03-09T18:29:03.825961+0000","last_undegraded":"2026-03-09T18:29:03.825961+0000","last_fullsized":"2026-03-09T18:29:03.825961+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:
59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T03:07:55.782886+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,7],"acting":[1,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.1a","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.829422+0000","last_change":"2026-03-09T18:29:02.831406+0000","last_active":"2026-03-09T18:29:03.829422+0000","last_peered":"2026-03-09T18:29:03.829422+0000","last_clean":"2026-03-09T18:29:03.829422+0000","last_became_active":"2026-03-09T18:29:02.830785+0000","last_became_peered":"2026-03-09T18:29:02.830785+0000
","last_unstale":"2026-03-09T18:29:03.829422+0000","last_undegraded":"2026-03-09T18:29:03.829422+0000","last_fullsized":"2026-03-09T18:29:03.829422+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:39:02.176302+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,1],"acting":[4,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.1
b","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.816194+0000","last_change":"2026-03-09T18:29:02.826043+0000","last_active":"2026-03-09T18:29:03.816194+0000","last_peered":"2026-03-09T18:29:03.816194+0000","last_clean":"2026-03-09T18:29:03.816194+0000","last_became_active":"2026-03-09T18:29:02.825906+0000","last_became_peered":"2026-03-09T18:29:02.825906+0000","last_unstale":"2026-03-09T18:29:03.816194+0000","last_undegraded":"2026-03-09T18:29:03.816194+0000","last_fullsized":"2026-03-09T18:29:03.816194+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:32:52.061487+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1e","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.816221+0000","last_change":"2026-03-09T18:28:56.744170+0000","last_active":"2026-03-09T18:29:03.816221+0000","last_peered":"2026-03-09T18:29:03.816221+0000","last_clean":"2026-03-09T18:29:03.816221+0000","last_became_active":"2026-03-09T18:28:56.742551+0000","last_became_peered":"2026-03-09T18:28:56.742551+0000","last_unstale":"2026-03-09T18:29:03.816221+0000","last_undegraded":"2026-03-09T18:29:03.816221+0000","last_fullsized":"2026-03-09T18:29:03.816221+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698
736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T02:28:45.789965+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,2],"acting":[3,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.19","version":"54'15","reported_seq":46,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.677236+0000","last_change":"2026-03-09T18:28:58.753837+0000","last_active":"2026-03-09T18:29:04.677236+0000","last_peered":"2026-03-09T18:29:04.677236+0000","last_clean":"2026-03-09T18:29:04.677236+0000","last_became_active":"2026-03-09T18:28:58.753718+0000","last_became_peered":"2026-03-09T18:28:58.753718+0000","l
ast_unstale":"2026-03-09T18:29:04.677236+0000","last_undegraded":"2026-03-09T18:29:04.677236+0000","last_fullsized":"2026-03-09T18:29:04.677236+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T02:26:07.802012+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,2,0],"acting":[3,2,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":
"5.18","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.828417+0000","last_change":"2026-03-09T18:29:00.809301+0000","last_active":"2026-03-09T18:29:03.828417+0000","last_peered":"2026-03-09T18:29:03.828417+0000","last_clean":"2026-03-09T18:29:03.828417+0000","last_became_active":"2026-03-09T18:29:00.809202+0000","last_became_peered":"2026-03-09T18:29:00.809202+0000","last_unstale":"2026-03-09T18:29:03.828417+0000","last_undegraded":"2026-03-09T18:29:03.828417+0000","last_fullsized":"2026-03-09T18:29:03.828417+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:27:23.808124+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.1d","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T18:29:07.708802+0000","last_change":"2026-03-09T18:28:56.741149+0000","last_active":"2026-03-09T18:29:07.708802+0000","last_peered":"2026-03-09T18:29:07.708802+0000","last_clean":"2026-03-09T18:29:07.708802+0000","last_became_active":"2026-03-09T18:28:56.741074+0000","last_became_peered":"2026-03-09T18:28:56.741074+0000","last_unstale":"2026-03-09T18:29:07.708802+0000","last_undegraded":"2026-03-09T18:29:07.708802+0000","last_fullsized":"2026-03-09T18:29:07.708802+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698
736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:43:45.473904+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"4.1a","version":"54'9","reported_seq":37,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.521615+0000","last_change":"2026-03-09T18:28:58.752473+0000","last_active":"2026-03-09T18:29:04.521615+0000","last_peered":"2026-03-09T18:29:04.521615+0000","last_clean":"2026-03-09T18:29:04.521615+0000","last_became_active":"2026-03-09T18:28:58.752379+0000","last_became_peered":"2026-03-09T18:28:58.752379+0000","la
st_unstale":"2026-03-09T18:29:04.521615+0000","last_undegraded":"2026-03-09T18:29:04.521615+0000","last_fullsized":"2026-03-09T18:29:04.521615+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:45:55.685837+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,0],"acting":[4,3,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.
1b","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T18:29:07.708828+0000","last_change":"2026-03-09T18:29:00.816852+0000","last_active":"2026-03-09T18:29:07.708828+0000","last_peered":"2026-03-09T18:29:07.708828+0000","last_clean":"2026-03-09T18:29:07.708828+0000","last_became_active":"2026-03-09T18:29:00.809225+0000","last_became_peered":"2026-03-09T18:29:00.809225+0000","last_unstale":"2026-03-09T18:29:07.708828+0000","last_undegraded":"2026-03-09T18:29:07.708828+0000","last_fullsized":"2026-03-09T18:29:07.708828+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:35:44.330970+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,0,7],"acting":[5,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.18","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.819774+0000","last_change":"2026-03-09T18:29:02.825582+0000","last_active":"2026-03-09T18:29:03.819774+0000","last_peered":"2026-03-09T18:29:03.819774+0000","last_clean":"2026-03-09T18:29:03.819774+0000","last_became_active":"2026-03-09T18:29:02.825489+0000","last_became_peered":"2026-03-09T18:29:02.825489+0000","last_unstale":"2026-03-09T18:29:03.819774+0000","last_undegraded":"2026-03-09T18:29:03.819774+0000","last_fullsized":"2026-03-09T18:29:03.819774+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787
695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T18:39:29.679281+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,7],"acting":[0,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.1c","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T18:29:07.708934+0000","last_change":"2026-03-09T18:28:56.736006+0000","last_active":"2026-03-09T18:29:07.708934+0000","last_peered":"2026-03-09T18:29:07.708934+0000","last_clean":"2026-03-09T18:29:07.708934+0000","last_became_active":"2026-03-09T18:28:56.735874+0000","last_became_peered":"2026-03-09T18:28:56.735874+0000","las
t_unstale":"2026-03-09T18:29:07.708934+0000","last_undegraded":"2026-03-09T18:29:07.708934+0000","last_fullsized":"2026-03-09T18:29:07.708934+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:18:33.143247+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,1],"acting":[5,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"4.1b","ve
rsion":"54'5","reported_seq":31,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.492396+0000","last_change":"2026-03-09T18:28:58.768364+0000","last_active":"2026-03-09T18:29:04.492396+0000","last_peered":"2026-03-09T18:29:04.492396+0000","last_clean":"2026-03-09T18:29:04.492396+0000","last_became_active":"2026-03-09T18:28:58.766879+0000","last_became_peered":"2026-03-09T18:28:58.766879+0000","last_unstale":"2026-03-09T18:29:04.492396+0000","last_undegraded":"2026-03-09T18:29:04.492396+0000","last_fullsized":"2026-03-09T18:29:04.492396+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:08:46.335523+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":11,"num_read_kb":7,"num_write":6,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,1],"acting":[4,3,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.1a","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.798699+0000","last_change":"2026-03-09T18:29:00.808284+0000","last_active":"2026-03-09T18:29:03.798699+0000","last_peered":"2026-03-09T18:29:03.798699+0000","last_clean":"2026-03-09T18:29:03.798699+0000","last_became_active":"2026-03-09T18:29:00.808181+0000","last_became_peered":"2026-03-09T18:29:00.808181+0000","last_unstale":"2026-03-09T18:29:03.798699+0000","last_undegraded":"2026-03-09T18:29:03.798699+0000","last_fullsized":"2026-03-09T18:29:03.798699+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.
730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:09:43.517918+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.19","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T18:29:07.708909+0000","last_change":"2026-03-09T18:29:02.836031+0000","last_active":"2026-03-09T18:29:07.708909+0000","last_peered":"2026-03-09T18:29:07.708909+0000","last_clean":"2026-03-09T18:29:07.708909+0000","last_became_active":"2026-03-09T18:29:02.835445+0000","last_became_peered":"2026-03-09T18:29:02.835445+0000","
last_unstale":"2026-03-09T18:29:07.708909+0000","last_undegraded":"2026-03-09T18:29:07.708909+0000","last_fullsized":"2026-03-09T18:29:07.708909+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T04:08:29.959920+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,3],"acting":[5,1,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.1e",
"version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.829361+0000","last_change":"2026-03-09T18:29:02.831338+0000","last_active":"2026-03-09T18:29:03.829361+0000","last_peered":"2026-03-09T18:29:03.829361+0000","last_clean":"2026-03-09T18:29:03.829361+0000","last_became_active":"2026-03-09T18:29:02.829892+0000","last_became_peered":"2026-03-09T18:29:02.829892+0000","last_unstale":"2026-03-09T18:29:03.829361+0000","last_undegraded":"2026-03-09T18:29:03.829361+0000","last_fullsized":"2026-03-09T18:29:03.829361+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:05:41.692572+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,5],"acting":[4,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.1b","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.819175+0000","last_change":"2026-03-09T18:28:56.751172+0000","last_active":"2026-03-09T18:29:03.819175+0000","last_peered":"2026-03-09T18:29:03.819175+0000","last_clean":"2026-03-09T18:29:03.819175+0000","last_became_active":"2026-03-09T18:28:56.750897+0000","last_became_peered":"2026-03-09T18:28:56.750897+0000","last_unstale":"2026-03-09T18:29:03.819175+0000","last_undegraded":"2026-03-09T18:29:03.819175+0000","last_fullsized":"2026-03-09T18:29:03.819175+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698
736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:43:10.905138+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,7],"acting":[0,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.1c","version":"54'15","reported_seq":46,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.569015+0000","last_change":"2026-03-09T18:28:58.755586+0000","last_active":"2026-03-09T18:29:04.569015+0000","last_peered":"2026-03-09T18:29:04.569015+0000","last_clean":"2026-03-09T18:29:04.569015+0000","last_became_active":"2026-03-09T18:28:58.755350+0000","last_became_peered":"2026-03-09T18:28:58.755350+0000","l
ast_unstale":"2026-03-09T18:29:04.569015+0000","last_undegraded":"2026-03-09T18:29:04.569015+0000","last_fullsized":"2026-03-09T18:29:04.569015+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T05:17:38.599026+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,1,3],"acting":[2,1,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":
"5.1d","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.825741+0000","last_change":"2026-03-09T18:29:00.809132+0000","last_active":"2026-03-09T18:29:03.825741+0000","last_peered":"2026-03-09T18:29:03.825741+0000","last_clean":"2026-03-09T18:29:03.825741+0000","last_became_active":"2026-03-09T18:29:00.808854+0000","last_became_peered":"2026-03-09T18:29:00.808854+0000","last_unstale":"2026-03-09T18:29:03.825741+0000","last_undegraded":"2026-03-09T18:29:03.825741+0000","last_fullsized":"2026-03-09T18:29:03.825741+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:53:09.736568+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.1f","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.817063+0000","last_change":"2026-03-09T18:29:02.831100+0000","last_active":"2026-03-09T18:29:03.817063+0000","last_peered":"2026-03-09T18:29:03.817063+0000","last_clean":"2026-03-09T18:29:03.817063+0000","last_became_active":"2026-03-09T18:29:02.831000+0000","last_became_peered":"2026-03-09T18:29:02.831000+0000","last_unstale":"2026-03-09T18:29:03.817063+0000","last_undegraded":"2026-03-09T18:29:03.817063+0000","last_fullsized":"2026-03-09T18:29:03.817063+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787
695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:48:40.461585+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1a","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.828823+0000","last_change":"2026-03-09T18:28:56.743538+0000","last_active":"2026-03-09T18:29:03.828823+0000","last_peered":"2026-03-09T18:29:03.828823+0000","last_clean":"2026-03-09T18:29:03.828823+0000","last_became_active":"2026-03-09T18:28:56.737887+0000","last_became_peered":"2026-03-09T18:28:56.737887+0000","las
t_unstale":"2026-03-09T18:29:03.828823+0000","last_undegraded":"2026-03-09T18:29:03.828823+0000","last_fullsized":"2026-03-09T18:29:03.828823+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:08:12.364439+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"4.1d","ve
rsion":"54'12","reported_seq":44,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.680200+0000","last_change":"2026-03-09T18:28:58.763714+0000","last_active":"2026-03-09T18:29:04.680200+0000","last_peered":"2026-03-09T18:29:04.680200+0000","last_clean":"2026-03-09T18:29:04.680200+0000","last_became_active":"2026-03-09T18:28:58.763113+0000","last_became_peered":"2026-03-09T18:28:58.763113+0000","last_unstale":"2026-03-09T18:29:04.680200+0000","last_undegraded":"2026-03-09T18:29:04.680200+0000","last_fullsized":"2026-03-09T18:29:04.680200+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:34:02.312889+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,4],"acting":[3,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.1c","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.828799+0000","last_change":"2026-03-09T18:29:00.811578+0000","last_active":"2026-03-09T18:29:03.828799+0000","last_peered":"2026-03-09T18:29:03.828799+0000","last_clean":"2026-03-09T18:29:03.828799+0000","last_became_active":"2026-03-09T18:29:00.811434+0000","last_became_peered":"2026-03-09T18:29:00.811434+0000","last_unstale":"2026-03-09T18:29:03.828799+0000","last_undegraded":"2026-03-09T18:29:03.828799+0000","last_fullsized":"2026-03-09T18:29:03.828799+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:
59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:47:02.563622+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.1c","version":"54'1","reported_seq":14,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.799582+0000","last_change":"2026-03-09T18:29:02.831962+0000","last_active":"2026-03-09T18:29:03.799582+0000","last_peered":"2026-03-09T18:29:03.799582+0000","last_clean":"2026-03-09T18:29:03.799582+0000","last_became_active":"2026-03-09T18:29:02.831624+0000","last_became_peered":"2026-03-09T18:29:02.831624+000
0","last_unstale":"2026-03-09T18:29:03.799582+0000","last_undegraded":"2026-03-09T18:29:03.799582+0000","last_fullsized":"2026-03-09T18:29:03.799582+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:55:47.388332+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":403,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,2],"acting":[7,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"
3.19","version":"47'1","reported_seq":26,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.826476+0000","last_change":"2026-03-09T18:28:56.748709+0000","last_active":"2026-03-09T18:29:03.826476+0000","last_peered":"2026-03-09T18:29:03.826476+0000","last_clean":"2026-03-09T18:29:03.826476+0000","last_became_active":"2026-03-09T18:28:56.745198+0000","last_became_peered":"2026-03-09T18:28:56.745198+0000","last_unstale":"2026-03-09T18:29:03.826476+0000","last_undegraded":"2026-03-09T18:29:03.826476+0000","last_fullsized":"2026-03-09T18:29:03.826476+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:25:51.347690+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,4],"acting":[1,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.1e","version":"54'10","reported_seq":36,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.664205+0000","last_change":"2026-03-09T18:28:58.958989+0000","last_active":"2026-03-09T18:29:04.664205+0000","last_peered":"2026-03-09T18:29:04.664205+0000","last_clean":"2026-03-09T18:29:04.664205+0000","last_became_active":"2026-03-09T18:28:58.958622+0000","last_became_peered":"2026-03-09T18:28:58.958622+0000","last_unstale":"2026-03-09T18:29:04.664205+0000","last_undegraded":"2026-03-09T18:29:04.664205+0000","last_fullsized":"2026-03-09T18:29:04.664205+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.
719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T23:21:27.651666+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.1f","version":"54'8","reported_seq":31,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.238902+0000","last_change":"2026-03-09T18:29:00.809873+0000","last_active":"2026-03-09T18:29:04.238902+0000","last_peered":"2026-03-09T18:29:04.238902+0000","last_clean":"2026-03-09T18:29:04.238902+0000","last_became_active":"2026-03-09T18:29:00.809701+0000","last_became_peered":"2026-03-09T18:29:00.809701+
0000","last_unstale":"2026-03-09T18:29:04.238902+0000","last_undegraded":"2026-03-09T18:29:04.238902+0000","last_fullsized":"2026-03-09T18:29:04.238902+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T19:08:06.847515+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":
"4.f","version":"54'15","reported_seq":46,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.688629+0000","last_change":"2026-03-09T18:28:58.756197+0000","last_active":"2026-03-09T18:29:04.688629+0000","last_peered":"2026-03-09T18:29:04.688629+0000","last_clean":"2026-03-09T18:29:04.688629+0000","last_became_active":"2026-03-09T18:28:58.756004+0000","last_became_peered":"2026-03-09T18:28:58.756004+0000","last_unstale":"2026-03-09T18:29:04.688629+0000","last_undegraded":"2026-03-09T18:29:04.688629+0000","last_fullsized":"2026-03-09T18:29:04.688629+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:30:00.619326+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,4],"acting":[1,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.8","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.816364+0000","last_change":"2026-03-09T18:28:56.741491+0000","last_active":"2026-03-09T18:29:03.816364+0000","last_peered":"2026-03-09T18:29:03.816364+0000","last_clean":"2026-03-09T18:29:03.816364+0000","last_became_active":"2026-03-09T18:28:56.741240+0000","last_became_peered":"2026-03-09T18:28:56.741240+0000","last_unstale":"2026-03-09T18:29:03.816364+0000","last_undegraded":"2026-03-09T18:29:03.816364+0000","last_fullsized":"2026-03-09T18:29:03.816364+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:5
5.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:50:33.350549+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.e","version":"54'8","reported_seq":28,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.252607+0000","last_change":"2026-03-09T18:29:00.824971+0000","last_active":"2026-03-09T18:29:04.252607+0000","last_peered":"2026-03-09T18:29:04.252607+0000","last_clean":"2026-03-09T18:29:04.252607+0000","last_became_active":"2026-03-09T18:29:00.824848+0000","last_became_peered":"2026-03-09T18:29:00.824848+0000"
,"last_unstale":"2026-03-09T18:29:04.252607+0000","last_undegraded":"2026-03-09T18:29:04.252607+0000","last_fullsized":"2026-03-09T18:29:04.252607+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:07:14.583289+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,0],"acting":[4,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.d"
,"version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T18:29:07.708576+0000","last_change":"2026-03-09T18:29:02.809464+0000","last_active":"2026-03-09T18:29:07.708576+0000","last_peered":"2026-03-09T18:29:07.708576+0000","last_clean":"2026-03-09T18:29:07.708576+0000","last_became_active":"2026-03-09T18:29:02.809327+0000","last_became_peered":"2026-03-09T18:29:02.809327+0000","last_unstale":"2026-03-09T18:29:07.708576+0000","last_undegraded":"2026-03-09T18:29:07.708576+0000","last_fullsized":"2026-03-09T18:29:07.708576+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:10:57.044992+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"4.0","version":"54'18","reported_seq":53,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.609963+0000","last_change":"2026-03-09T18:28:58.957906+0000","last_active":"2026-03-09T18:29:04.609963+0000","last_peered":"2026-03-09T18:29:04.609963+0000","last_clean":"2026-03-09T18:29:04.609963+0000","last_became_active":"2026-03-09T18:28:58.957209+0000","last_became_peered":"2026-03-09T18:28:58.957209+0000","last_unstale":"2026-03-09T18:29:04.609963+0000","last_undegraded":"2026-03-09T18:29:04.609963+0000","last_fullsized":"2026-03-09T18:29:04.609963+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.71
9732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":18,"log_dups_size":0,"ondisk_log_size":18,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T23:20:43.011755+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":34,"num_read_kb":22,"num_write":20,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.7","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.816423+0000","last_change":"2026-03-09T18:28:56.741679+0000","last_active":"2026-03-09T18:29:03.816423+0000","last_peered":"2026-03-09T18:29:03.816423+0000","last_clean":"2026-03-09T18:29:03.816423+0000","last_became_active":"2026-03-09T18:28:56.741410+0000","last_became_peered":"2026-03-09T18:28:56.741410+00
00","last_unstale":"2026-03-09T18:29:03.816423+0000","last_undegraded":"2026-03-09T18:29:03.816423+0000","last_fullsized":"2026-03-09T18:29:03.816423+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T05:45:34.523525+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5
.1","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.828481+0000","last_change":"2026-03-09T18:29:00.823863+0000","last_active":"2026-03-09T18:29:03.828481+0000","last_peered":"2026-03-09T18:29:03.828481+0000","last_clean":"2026-03-09T18:29:03.828481+0000","last_became_active":"2026-03-09T18:29:00.823760+0000","last_became_peered":"2026-03-09T18:29:00.823760+0000","last_unstale":"2026-03-09T18:29:03.828481+0000","last_undegraded":"2026-03-09T18:29:03.828481+0000","last_fullsized":"2026-03-09T18:29:03.828481+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:47:29.246968+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,7],"acting":[4,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.2","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.828456+0000","last_change":"2026-03-09T18:29:02.824568+0000","last_active":"2026-03-09T18:29:03.828456+0000","last_peered":"2026-03-09T18:29:03.828456+0000","last_clean":"2026-03-09T18:29:03.828456+0000","last_became_active":"2026-03-09T18:29:02.824449+0000","last_became_peered":"2026-03-09T18:29:02.824449+0000","last_unstale":"2026-03-09T18:29:03.828456+0000","last_undegraded":"2026-03-09T18:29:03.828456+0000","last_fullsized":"2026-03-09T18:29:03.828456+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.7876
95+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:06:04.824142+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"4.1","version":"54'14","reported_seq":42,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.666861+0000","last_change":"2026-03-09T18:28:58.753916+0000","last_active":"2026-03-09T18:29:04.666861+0000","last_peered":"2026-03-09T18:29:04.666861+0000","last_clean":"2026-03-09T18:29:04.666861+0000","last_became_active":"2026-03-09T18:28:58.751911+0000","last_became_peered":"2026-03-09T18:28:58.751911+0000","las
t_unstale":"2026-03-09T18:29:04.666861+0000","last_undegraded":"2026-03-09T18:29:04.666861+0000","last_fullsized":"2026-03-09T18:29:04.666861+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":14,"log_dups_size":0,"ondisk_log_size":14,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T19:58:47.607740+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":21,"num_read_kb":14,"num_write":14,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,6],"acting":[4,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.6
","version":"47'1","reported_seq":26,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.819237+0000","last_change":"2026-03-09T18:28:56.751235+0000","last_active":"2026-03-09T18:29:03.819237+0000","last_peered":"2026-03-09T18:29:03.819237+0000","last_clean":"2026-03-09T18:29:03.819237+0000","last_became_active":"2026-03-09T18:28:56.751007+0000","last_became_peered":"2026-03-09T18:28:56.751007+0000","last_unstale":"2026-03-09T18:29:03.819237+0000","last_undegraded":"2026-03-09T18:29:03.819237+0000","last_fullsized":"2026-03-09T18:29:03.819237+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:40:10.311698+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.0","version":"54'8","reported_seq":28,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.248547+0000","last_change":"2026-03-09T18:29:00.811306+0000","last_active":"2026-03-09T18:29:04.248547+0000","last_peered":"2026-03-09T18:29:04.248547+0000","last_clean":"2026-03-09T18:29:04.248547+0000","last_became_active":"2026-03-09T18:29:00.811178+0000","last_became_peered":"2026-03-09T18:29:00.811178+0000","last_unstale":"2026-03-09T18:29:04.248547+0000","last_undegraded":"2026-03-09T18:29:04.248547+0000","last_fullsized":"2026-03-09T18:29:04.248547+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.73
0110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T19:32:03.352840+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,4],"acting":[3,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.3","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.798670+0000","last_change":"2026-03-09T18:29:02.824755+0000","last_active":"2026-03-09T18:29:03.798670+0000","last_peered":"2026-03-09T18:29:03.798670+0000","last_clean":"2026-03-09T18:29:03.798670+0000","last_became_active":"2026-03-09T18:29:02.824431+0000","last_became_peered":"2026-03-09T18:29:02.824431+0000","las
t_unstale":"2026-03-09T18:29:03.798670+0000","last_undegraded":"2026-03-09T18:29:03.798670+0000","last_fullsized":"2026-03-09T18:29:03.798670+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:14:23.952912+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,2],"acting":[7,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"4.2","ver
sion":"54'10","reported_seq":36,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.643431+0000","last_change":"2026-03-09T18:28:58.752273+0000","last_active":"2026-03-09T18:29:04.643431+0000","last_peered":"2026-03-09T18:29:04.643431+0000","last_clean":"2026-03-09T18:29:04.643431+0000","last_became_active":"2026-03-09T18:28:58.752071+0000","last_became_peered":"2026-03-09T18:28:58.752071+0000","last_unstale":"2026-03-09T18:29:04.643431+0000","last_undegraded":"2026-03-09T18:29:04.643431+0000","last_fullsized":"2026-03-09T18:29:04.643431+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:50:32.470344+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.5","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T18:29:07.709268+0000","last_change":"2026-03-09T18:28:56.738083+0000","last_active":"2026-03-09T18:29:07.709268+0000","last_peered":"2026-03-09T18:29:07.709268+0000","last_clean":"2026-03-09T18:29:07.709268+0000","last_became_active":"2026-03-09T18:28:56.737982+0000","last_became_peered":"2026-03-09T18:28:56.737982+0000","last_unstale":"2026-03-09T18:29:07.709268+0000","last_undegraded":"2026-03-09T18:29:07.709268+0000","last_fullsized":"2026-03-09T18:29:07.709268+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.
698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T19:06:33.257242+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,2],"acting":[5,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.3","version":"54'8","reported_seq":27,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.255852+0000","last_change":"2026-03-09T18:29:00.824638+0000","last_active":"2026-03-09T18:29:04.255852+0000","last_peered":"2026-03-09T18:29:04.255852+0000","last_clean":"2026-03-09T18:29:04.255852+0000","last_became_active":"2026-03-09T18:29:00.824547+0000","last_became_peered":"2026-03-09T18:29:00.824547+0000","
last_unstale":"2026-03-09T18:29:04.255852+0000","last_undegraded":"2026-03-09T18:29:04.255852+0000","last_fullsized":"2026-03-09T18:29:04.255852+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:30:50.072164+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,6,5],"acting":[0,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.0","
version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.820142+0000","last_change":"2026-03-09T18:29:02.822449+0000","last_active":"2026-03-09T18:29:03.820142+0000","last_peered":"2026-03-09T18:29:03.820142+0000","last_clean":"2026-03-09T18:29:03.820142+0000","last_became_active":"2026-03-09T18:29:02.822218+0000","last_became_peered":"2026-03-09T18:29:02.822218+0000","last_unstale":"2026-03-09T18:29:03.820142+0000","last_undegraded":"2026-03-09T18:29:03.820142+0000","last_fullsized":"2026-03-09T18:29:03.820142+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:24:57.749688+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,3,2],"acting":[0,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.3","version":"54'19","reported_seq":57,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.632583+0000","last_change":"2026-03-09T18:28:58.959129+0000","last_active":"2026-03-09T18:29:04.632583+0000","last_peered":"2026-03-09T18:29:04.632583+0000","last_clean":"2026-03-09T18:29:04.632583+0000","last_became_active":"2026-03-09T18:28:58.958838+0000","last_became_peered":"2026-03-09T18:28:58.958838+0000","last_unstale":"2026-03-09T18:29:04.632583+0000","last_undegraded":"2026-03-09T18:29:04.632583+0000","last_fullsized":"2026-03-09T18:29:04.632583+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.71
9732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:23:56.655830+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":39,"num_read_kb":25,"num_write":22,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,7],"acting":[0,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.4","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.826370+0000","last_change":"2026-03-09T18:28:56.739172+0000","last_active":"2026-03-09T18:29:03.826370+0000","last_peered":"2026-03-09T18:29:03.826370+0000","last_clean":"2026-03-09T18:29:03.826370+0000","last_became_active":"2026-03-09T18:28:56.739065+0000","last_became_peered":"2026-03-09T18:28:56.739065+00
00","last_unstale":"2026-03-09T18:29:03.826370+0000","last_undegraded":"2026-03-09T18:29:03.826370+0000","last_fullsized":"2026-03-09T18:29:03.826370+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T02:19:47.432313+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,5],"acting":[1,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5
.2","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.839781+0000","last_change":"2026-03-09T18:29:00.806727+0000","last_active":"2026-03-09T18:29:03.839781+0000","last_peered":"2026-03-09T18:29:03.839781+0000","last_clean":"2026-03-09T18:29:03.839781+0000","last_became_active":"2026-03-09T18:29:00.806617+0000","last_became_peered":"2026-03-09T18:29:00.806617+0000","last_unstale":"2026-03-09T18:29:03.839781+0000","last_undegraded":"2026-03-09T18:29:03.839781+0000","last_fullsized":"2026-03-09T18:29:03.839781+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T06:16:07.812168+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.1","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.826353+0000","last_change":"2026-03-09T18:29:02.807483+0000","last_active":"2026-03-09T18:29:03.826353+0000","last_peered":"2026-03-09T18:29:03.826353+0000","last_clean":"2026-03-09T18:29:03.826353+0000","last_became_active":"2026-03-09T18:29:02.807412+0000","last_became_peered":"2026-03-09T18:29:02.807412+0000","last_unstale":"2026-03-09T18:29:03.826353+0000","last_undegraded":"2026-03-09T18:29:03.826353+0000","last_fullsized":"2026-03-09T18:29:03.826353+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.7876
95+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T19:24:23.736164+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,2],"acting":[1,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.4","version":"54'28","reported_seq":71,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.646601+0000","last_change":"2026-03-09T18:28:58.758395+0000","last_active":"2026-03-09T18:29:04.646601+0000","last_peered":"2026-03-09T18:29:04.646601+0000","last_clean":"2026-03-09T18:29:04.646601+0000","last_became_active":"2026-03-09T18:28:58.756605+0000","last_became_peered":"2026-03-09T18:28:58.756605+0000","las
t_unstale":"2026-03-09T18:29:04.646601+0000","last_undegraded":"2026-03-09T18:29:04.646601+0000","last_fullsized":"2026-03-09T18:29:04.646601+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":28,"log_dups_size":0,"ondisk_log_size":28,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:19:05.758695+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":358,"num_objects":10,"num_object_clones":0,"num_object_copies":30,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":10,"num_whiteouts":0,"num_read":48,"num_read_kb":33,"num_write":26,"num_write_kb":4,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,3],"acting":[1,2,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":
"3.3","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.829010+0000","last_change":"2026-03-09T18:28:56.754612+0000","last_active":"2026-03-09T18:29:03.829010+0000","last_peered":"2026-03-09T18:29:03.829010+0000","last_clean":"2026-03-09T18:29:03.829010+0000","last_became_active":"2026-03-09T18:28:56.754491+0000","last_became_peered":"2026-03-09T18:28:56.754491+0000","last_unstale":"2026-03-09T18:29:03.829010+0000","last_undegraded":"2026-03-09T18:29:03.829010+0000","last_fullsized":"2026-03-09T18:29:03.829010+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:19:50.967887+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,0,6],"acting":[4,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.2","version":"49'2","reported_seq":34,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T18:29:07.709137+0000","last_change":"2026-03-09T18:28:58.733692+0000","last_active":"2026-03-09T18:29:07.709137+0000","last_peered":"2026-03-09T18:29:07.709137+0000","last_clean":"2026-03-09T18:29:07.709137+0000","last_became_active":"2026-03-09T18:28:56.740706+0000","last_became_peered":"2026-03-09T18:28:56.740706+0000","last_unstale":"2026-03-09T18:29:07.709137+0000","last_undegraded":"2026-03-09T18:29:07.709137+0000","last_fullsized":"2026-03-09T18:29:07.709137+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698
736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T03:34:09.499101+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00043812000000000001,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,6],"acting":[5,1,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.5","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.819544+0000","last_change":"2026-03-09T18:29:00.811774+0000","last_active":"2026-03-09T18:29:03.819544+0000","last_peered":"2026-03-09T18:29:03.819544+0000","last_clean":"2026-03-09T18:29:03.819544+0000","last_became_active":"2026-03-09T18:29:00.810734+0000","last_became_peered":"2026-03-09T18:29
:00.810734+0000","last_unstale":"2026-03-09T18:29:03.819544+0000","last_undegraded":"2026-03-09T18:29:03.819544+0000","last_fullsized":"2026-03-09T18:29:03.819544+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T23:17:46.567639+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[
]},{"pgid":"6.6","version":"54'1","reported_seq":14,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.816150+0000","last_change":"2026-03-09T18:29:02.835124+0000","last_active":"2026-03-09T18:29:03.816150+0000","last_peered":"2026-03-09T18:29:03.816150+0000","last_clean":"2026-03-09T18:29:03.816150+0000","last_became_active":"2026-03-09T18:29:02.833514+0000","last_became_peered":"2026-03-09T18:29:02.833514+0000","last_unstale":"2026-03-09T18:29:03.816150+0000","last_undegraded":"2026-03-09T18:29:03.816150+0000","last_fullsized":"2026-03-09T18:29:03.816150+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:53:44.975182+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":13,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,4,7],"acting":[3,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.7","version":"54'13","reported_seq":48,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.549744+0000","last_change":"2026-03-09T18:28:58.752148+0000","last_active":"2026-03-09T18:29:04.549744+0000","last_peered":"2026-03-09T18:29:04.549744+0000","last_clean":"2026-03-09T18:29:04.549744+0000","last_became_active":"2026-03-09T18:28:58.751599+0000","last_became_peered":"2026-03-09T18:28:58.751599+0000","last_unstale":"2026-03-09T18:29:04.549744+0000","last_undegraded":"2026-03-09T18:29:04.549744+0000","last_fullsized":"2026-03-09T18:29:04.549744+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.7
19732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":13,"log_dups_size":0,"ondisk_log_size":13,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T18:34:02.381971+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":30,"num_read_kb":19,"num_write":16,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,0],"acting":[1,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.0","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.826400+0000","last_change":"2026-03-09T18:28:56.754725+0000","last_active":"2026-03-09T18:29:03.826400+0000","last_peered":"2026-03-09T18:29:03.826400+0000","last_clean":"2026-03-09T18:29:03.826400+0000","last_became_active":"2026-03-09T18:28:56.754489+0000","last_became_peered":"2026-03-09T18:28:56.754489+0
000","last_unstale":"2026-03-09T18:29:03.826400+0000","last_undegraded":"2026-03-09T18:29:03.826400+0000","last_fullsized":"2026-03-09T18:29:03.826400+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T04:53:44.130075+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,6],"acting":[1,2,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"
2.1","version":"47'1","reported_seq":31,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.798668+0000","last_change":"2026-03-09T18:28:58.745578+0000","last_active":"2026-03-09T18:29:03.798668+0000","last_peered":"2026-03-09T18:29:03.798668+0000","last_clean":"2026-03-09T18:29:03.798668+0000","last_became_active":"2026-03-09T18:28:56.739995+0000","last_became_peered":"2026-03-09T18:28:56.739995+0000","last_unstale":"2026-03-09T18:29:03.798668+0000","last_undegraded":"2026-03-09T18:29:03.798668+0000","last_fullsized":"2026-03-09T18:29:03.798668+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:44:23.980904+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00082507499999999998,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,0],"acting":[2,3,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"5.6","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.798702+0000","last_change":"2026-03-09T18:29:00.809433+0000","last_active":"2026-03-09T18:29:03.798702+0000","last_peered":"2026-03-09T18:29:03.798702+0000","last_clean":"2026-03-09T18:29:03.798702+0000","last_became_active":"2026-03-09T18:29:00.808179+0000","last_became_peered":"2026-03-09T18:29:00.808179+0000","last_unstale":"2026-03-09T18:29:03.798702+0000","last_undegraded":"2026-03-09T18:29:03.798702+0000","last_fullsized":"2026-03-09T18:29:03.798702+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","l
ast_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T05:55:18.612204+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,5,7],"acting":[2,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.5","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.798624+0000","last_change":"2026-03-09T18:29:02.838489+0000","last_active":"2026-03-09T18:29:03.798624+0000","last_peered":"2026-03-09T18:29:03.798624+0000","last_clean":"2026-03-09T18:29:03.798624+0000","last_became_active":"2026-03-09T18:29:02.838319+0000","last_became_p
eered":"2026-03-09T18:29:02.838319+0000","last_unstale":"2026-03-09T18:29:03.798624+0000","last_undegraded":"2026-03-09T18:29:03.798624+0000","last_fullsized":"2026-03-09T18:29:03.798624+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T18:29:15.884561+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,3],"acting":[7,6,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_pri
mary":7,"purged_snaps":[]},{"pgid":"4.6","version":"54'12","reported_seq":39,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.563963+0000","last_change":"2026-03-09T18:28:58.759952+0000","last_active":"2026-03-09T18:29:04.563963+0000","last_peered":"2026-03-09T18:29:04.563963+0000","last_clean":"2026-03-09T18:29:04.563963+0000","last_became_active":"2026-03-09T18:28:58.759786+0000","last_became_peered":"2026-03-09T18:28:58.759786+0000","last_unstale":"2026-03-09T18:29:04.563963+0000","last_undegraded":"2026-03-09T18:29:04.563963+0000","last_fullsized":"2026-03-09T18:29:04.563963+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:12:19.211826+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":18,"num_read_kb":12,"num_write":12,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,2],"acting":[0,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.1","version":"47'1","reported_seq":31,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.001389+0000","last_change":"2026-03-09T18:28:56.751086+0000","last_active":"2026-03-09T18:29:04.001389+0000","last_peered":"2026-03-09T18:29:04.001389+0000","last_clean":"2026-03-09T18:29:04.001389+0000","last_became_active":"2026-03-09T18:28:56.750738+0000","last_became_peered":"2026-03-09T18:28:56.750738+0000","last_unstale":"2026-03-09T18:29:04.001389+0000","last_undegraded":"2026-03-09T18:29:04.001389+0000","last_fullsized":"2026-03-09T18:29:04.001389+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55
.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:48:45.415664+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":436,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":7,"num_read_kb":7,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,3],"acting":[0,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.0","version":"54'5","reported_seq":41,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:06.636357+0000","last_change":"2026-03-09T18:28:58.955759+0000","last_active":"2026-03-09T18:29:06.636357+0000","last_peered":"2026-03-09T18:29:06.636357+0000","last_clean":"2026-03-09T18:29:06.636357+0000","last_became_active":"2026-03-09T18:28:56.746218+0000","last_became_peered":"2026-03-09T18:28:56.746218+0000
","last_unstale":"2026-03-09T18:29:06.636357+0000","last_undegraded":"2026-03-09T18:29:06.636357+0000","last_fullsized":"2026-03-09T18:29:06.636357+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:08:22.225165+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00051439300000000003,"stat_sum":{"num_bytes":389,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":8,"num_read_kb":3,"num_write":4,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,1,0],"acting":[7,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_
snaps":[{"start":"2","length":"1"}]},{"pgid":"5.7","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T18:29:07.708978+0000","last_change":"2026-03-09T18:29:00.809419+0000","last_active":"2026-03-09T18:29:07.708978+0000","last_peered":"2026-03-09T18:29:07.708978+0000","last_clean":"2026-03-09T18:29:07.708978+0000","last_became_active":"2026-03-09T18:29:00.808721+0000","last_became_peered":"2026-03-09T18:29:00.808721+0000","last_unstale":"2026-03-09T18:29:07.708978+0000","last_undegraded":"2026-03-09T18:29:07.708978+0000","last_fullsized":"2026-03-09T18:29:07.708978+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T19:33:31.802189+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.4","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.826171+0000","last_change":"2026-03-09T18:29:02.825333+0000","last_active":"2026-03-09T18:29:03.826171+0000","last_peered":"2026-03-09T18:29:03.826171+0000","last_clean":"2026-03-09T18:29:03.826171+0000","last_became_active":"2026-03-09T18:29:02.825075+0000","last_became_peered":"2026-03-09T18:29:02.825075+0000","last_unstale":"2026-03-09T18:29:03.826171+0000","last_undegraded":"2026-03-09T18:29:03.826171+0000","last_fullsized":"2026-03-09T18:29:03.826171+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.7876
95+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:11:20.002612+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.5","version":"54'16","reported_seq":46,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.653668+0000","last_change":"2026-03-09T18:28:58.959895+0000","last_active":"2026-03-09T18:29:04.653668+0000","last_peered":"2026-03-09T18:29:04.653668+0000","last_clean":"2026-03-09T18:29:04.653668+0000","last_became_active":"2026-03-09T18:28:58.959681+0000","last_became_peered":"2026-03-09T18:28:58.959681+0000","las
t_unstale":"2026-03-09T18:29:04.653668+0000","last_undegraded":"2026-03-09T18:29:04.653668+0000","last_fullsized":"2026-03-09T18:29:04.653668+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":16,"log_dups_size":0,"ondisk_log_size":16,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:53:14.109787+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":154,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":25,"num_read_kb":15,"num_write":13,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"3
.2","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.816576+0000","last_change":"2026-03-09T18:28:56.744432+0000","last_active":"2026-03-09T18:29:03.816576+0000","last_peered":"2026-03-09T18:29:03.816576+0000","last_clean":"2026-03-09T18:29:03.816576+0000","last_became_active":"2026-03-09T18:28:56.744310+0000","last_became_peered":"2026-03-09T18:28:56.744310+0000","last_unstale":"2026-03-09T18:29:03.816576+0000","last_undegraded":"2026-03-09T18:29:03.816576+0000","last_fullsized":"2026-03-09T18:29:03.816576+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T19:48:45.163023+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,5,6],"acting":[3,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"1.0","version":"18'32","reported_seq":35,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.799184+0000","last_change":"2026-03-09T18:28:53.997687+0000","last_active":"2026-03-09T18:29:03.799184+0000","last_peered":"2026-03-09T18:29:03.799184+0000","last_clean":"2026-03-09T18:29:03.799184+0000","last_became_active":"2026-03-09T18:28:53.690811+0000","last_became_peered":"2026-03-09T18:28:53.690811+0000","last_unstale":"2026-03-09T18:29:03.799184+0000","last_undegraded":"2026-03-09T18:29:03.799184+0000","last_fullsized":"2026-03-09T18:29:03.799184+0000","mapping_epoch":44,"log_start":"0'0","ondisk_log_start":"0'0","created":17,"last_epoch_clean":45,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:01.605375+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:01.60
5375+0000","last_clean_scrub_stamp":"2026-03-09T18:28:01.605375+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T03:26:58.145626+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,6],"acting":[7,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.4","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.799262+0000","last_change":"2026-03-09T18:29:00.824326+0000","last_active":"2026-03-09T18:29:03.799262+0000","last_peered":"2026-03-09T18:29:03.799262+0000","last_clean":"2026-03-09T18:29:03.799262+0000","last_became_active":"2026-03-09T18:29:00.824243+0000","last_became_peered":"2026-03-09T18:29:00.
824243+0000","last_unstale":"2026-03-09T18:29:03.799262+0000","last_undegraded":"2026-03-09T18:29:03.799262+0000","last_fullsized":"2026-03-09T18:29:03.799262+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:14:15.092118+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,5],"acting":[7,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{
"pgid":"6.7","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T18:29:07.708853+0000","last_change":"2026-03-09T18:29:02.827077+0000","last_active":"2026-03-09T18:29:07.708853+0000","last_peered":"2026-03-09T18:29:07.708853+0000","last_clean":"2026-03-09T18:29:07.708853+0000","last_became_active":"2026-03-09T18:29:02.826978+0000","last_became_peered":"2026-03-09T18:29:02.826978+0000","last_unstale":"2026-03-09T18:29:07.708853+0000","last_undegraded":"2026-03-09T18:29:07.708853+0000","last_fullsized":"2026-03-09T18:29:07.708853+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:19:34.097009+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,4],"acting":[5,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"4.e","version":"54'11","reported_seq":40,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.658187+0000","last_change":"2026-03-09T18:28:58.779278+0000","last_active":"2026-03-09T18:29:04.658187+0000","last_peered":"2026-03-09T18:29:04.658187+0000","last_clean":"2026-03-09T18:29:04.658187+0000","last_became_active":"2026-03-09T18:28:58.777497+0000","last_became_peered":"2026-03-09T18:28:58.777497+0000","last_unstale":"2026-03-09T18:29:04.658187+0000","last_undegraded":"2026-03-09T18:29:04.658187+0000","last_fullsized":"2026-03-09T18:29:04.658187+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.71
9732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:02:21.813963+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.9","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.828923+0000","last_change":"2026-03-09T18:28:56.743290+0000","last_active":"2026-03-09T18:29:03.828923+0000","last_peered":"2026-03-09T18:29:03.828923+0000","last_clean":"2026-03-09T18:29:03.828923+0000","last_became_active":"2026-03-09T18:28:56.742847+0000","last_became_peered":"2026-03-09T18:28:56.742847+00
00","last_unstale":"2026-03-09T18:29:03.828923+0000","last_undegraded":"2026-03-09T18:29:03.828923+0000","last_fullsized":"2026-03-09T18:29:03.828923+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T03:01:23.665747+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,7],"acting":[4,2,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5
.f","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T18:29:07.709482+0000","last_change":"2026-03-09T18:29:00.816651+0000","last_active":"2026-03-09T18:29:07.709482+0000","last_peered":"2026-03-09T18:29:07.709482+0000","last_clean":"2026-03-09T18:29:07.709482+0000","last_became_active":"2026-03-09T18:29:00.816549+0000","last_became_peered":"2026-03-09T18:29:00.816549+0000","last_unstale":"2026-03-09T18:29:07.709482+0000","last_undegraded":"2026-03-09T18:29:07.709482+0000","last_fullsized":"2026-03-09T18:29:07.709482+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:34:29.374887+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.c","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.815825+0000","last_change":"2026-03-09T18:29:02.838484+0000","last_active":"2026-03-09T18:29:03.815825+0000","last_peered":"2026-03-09T18:29:03.815825+0000","last_clean":"2026-03-09T18:29:03.815825+0000","last_became_active":"2026-03-09T18:29:02.833798+0000","last_became_peered":"2026-03-09T18:29:02.833798+0000","last_unstale":"2026-03-09T18:29:03.815825+0000","last_undegraded":"2026-03-09T18:29:03.815825+0000","last_fullsized":"2026-03-09T18:29:03.815825+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.7876
95+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:40:52.341510+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.d","version":"54'17","reported_seq":49,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.655565+0000","last_change":"2026-03-09T18:28:58.768445+0000","last_active":"2026-03-09T18:29:04.655565+0000","last_peered":"2026-03-09T18:29:04.655565+0000","last_clean":"2026-03-09T18:29:04.655565+0000","last_became_active":"2026-03-09T18:28:58.767278+0000","last_became_peered":"2026-03-09T18:28:58.767278+0000","las
t_unstale":"2026-03-09T18:29:04.655565+0000","last_undegraded":"2026-03-09T18:29:04.655565+0000","last_fullsized":"2026-03-09T18:29:04.655565+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":17,"log_dups_size":0,"ondisk_log_size":17,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T19:15:17.008984+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":29,"num_read_kb":19,"num_write":18,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,1],"acting":[4,2,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3
.a","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.839795+0000","last_change":"2026-03-09T18:28:56.753643+0000","last_active":"2026-03-09T18:29:03.839795+0000","last_peered":"2026-03-09T18:29:03.839795+0000","last_clean":"2026-03-09T18:29:03.839795+0000","last_became_active":"2026-03-09T18:28:56.751726+0000","last_became_peered":"2026-03-09T18:28:56.751726+0000","last_unstale":"2026-03-09T18:29:03.839795+0000","last_undegraded":"2026-03-09T18:29:03.839795+0000","last_fullsized":"2026-03-09T18:29:03.839795+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:51:52.791289+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,1],"acting":[6,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.c","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.826808+0000","last_change":"2026-03-09T18:29:00.809643+0000","last_active":"2026-03-09T18:29:03.826808+0000","last_peered":"2026-03-09T18:29:03.826808+0000","last_clean":"2026-03-09T18:29:03.826808+0000","last_became_active":"2026-03-09T18:29:00.809166+0000","last_became_peered":"2026-03-09T18:29:00.809166+0000","last_unstale":"2026-03-09T18:29:03.826808+0000","last_undegraded":"2026-03-09T18:29:03.826808+0000","last_fullsized":"2026-03-09T18:29:03.826808+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.7301
10+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T23:09:16.649161+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.f","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.798767+0000","last_change":"2026-03-09T18:29:02.835760+0000","last_active":"2026-03-09T18:29:03.798767+0000","last_peered":"2026-03-09T18:29:03.798767+0000","last_clean":"2026-03-09T18:29:03.798767+0000","last_became_active":"2026-03-09T18:29:02.835654+0000","last_became_peered":"2026-03-09T18:29:02.835654+0000","last_
unstale":"2026-03-09T18:29:03.798767+0000","last_undegraded":"2026-03-09T18:29:03.798767+0000","last_fullsized":"2026-03-09T18:29:03.798767+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T02:28:58.669654+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,4],"acting":[2,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"4.c","versi
on":"54'10","reported_seq":36,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.583000+0000","last_change":"2026-03-09T18:28:58.753844+0000","last_active":"2026-03-09T18:29:04.583000+0000","last_peered":"2026-03-09T18:29:04.583000+0000","last_clean":"2026-03-09T18:29:04.583000+0000","last_became_active":"2026-03-09T18:28:58.751882+0000","last_became_peered":"2026-03-09T18:28:58.751882+0000","last_unstale":"2026-03-09T18:29:04.583000+0000","last_undegraded":"2026-03-09T18:29:04.583000+0000","last_fullsized":"2026-03-09T18:29:04.583000+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:50:29.285191+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,6],"acting":[4,3,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.b","version":"47'1","reported_seq":31,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.000518+0000","last_change":"2026-03-09T18:28:56.739248+0000","last_active":"2026-03-09T18:29:04.000518+0000","last_peered":"2026-03-09T18:29:04.000518+0000","last_clean":"2026-03-09T18:29:04.000518+0000","last_became_active":"2026-03-09T18:28:56.739150+0000","last_became_peered":"2026-03-09T18:28:56.739150+0000","last_unstale":"2026-03-09T18:29:04.000518+0000","last_undegraded":"2026-03-09T18:29:04.000518+0000","last_fullsized":"2026-03-09T18:29:04.000518+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55
.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T02:25:27.605504+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":993,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":7,"num_read_kb":7,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,4],"acting":[3,0,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.d","version":"54'8","reported_seq":30,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.194048+0000","last_change":"2026-03-09T18:29:00.809902+0000","last_active":"2026-03-09T18:29:04.194048+0000","last_peered":"2026-03-09T18:29:04.194048+0000","last_clean":"2026-03-09T18:29:04.194048+0000","last_became_active":"2026-03-09T18:29:00.808679+0000","last_became_peered":"2026-03-09T18:29:00.808679+0000
","last_unstale":"2026-03-09T18:29:04.194048+0000","last_undegraded":"2026-03-09T18:29:04.194048+0000","last_fullsized":"2026-03-09T18:29:04.194048+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:37:33.319437+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,7,5],"acting":[2,7,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.e
","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.828652+0000","last_change":"2026-03-09T18:29:02.831238+0000","last_active":"2026-03-09T18:29:03.828652+0000","last_peered":"2026-03-09T18:29:03.828652+0000","last_clean":"2026-03-09T18:29:03.828652+0000","last_became_active":"2026-03-09T18:29:02.830520+0000","last_became_peered":"2026-03-09T18:29:02.830520+0000","last_unstale":"2026-03-09T18:29:03.828652+0000","last_undegraded":"2026-03-09T18:29:03.828652+0000","last_fullsized":"2026-03-09T18:29:03.828652+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:48:23.849047+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"4.b","version":"54'9","reported_seq":37,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.626067+0000","last_change":"2026-03-09T18:28:58.756963+0000","last_active":"2026-03-09T18:29:04.626067+0000","last_peered":"2026-03-09T18:29:04.626067+0000","last_clean":"2026-03-09T18:29:04.626067+0000","last_became_active":"2026-03-09T18:28:58.756834+0000","last_became_peered":"2026-03-09T18:28:58.756834+0000","last_unstale":"2026-03-09T18:29:04.626067+0000","last_undegraded":"2026-03-09T18:29:04.626067+0000","last_fullsized":"2026-03-09T18:29:04.626067+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719
732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T19:48:38.692797+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.c","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T18:29:07.709080+0000","last_change":"2026-03-09T18:28:56.741274+0000","last_active":"2026-03-09T18:29:07.709080+0000","last_peered":"2026-03-09T18:29:07.709080+0000","last_clean":"2026-03-09T18:29:07.709080+0000","last_became_active":"2026-03-09T18:28:56.741207+0000","last_became_peered":"2026-03-09T18:28:56.741207+0000"
,"last_unstale":"2026-03-09T18:29:07.709080+0000","last_undegraded":"2026-03-09T18:29:07.709080+0000","last_fullsized":"2026-03-09T18:29:07.709080+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T03:42:21.087080+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,6],"acting":[5,3,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.a"
,"version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.798841+0000","last_change":"2026-03-09T18:29:00.817435+0000","last_active":"2026-03-09T18:29:03.798841+0000","last_peered":"2026-03-09T18:29:03.798841+0000","last_clean":"2026-03-09T18:29:03.798841+0000","last_became_active":"2026-03-09T18:29:00.817229+0000","last_became_peered":"2026-03-09T18:29:00.817229+0000","last_unstale":"2026-03-09T18:29:03.798841+0000","last_undegraded":"2026-03-09T18:29:03.798841+0000","last_fullsized":"2026-03-09T18:29:03.798841+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:23:06.790852+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,3],"acting":[2,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.9","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.820010+0000","last_change":"2026-03-09T18:29:02.824916+0000","last_active":"2026-03-09T18:29:03.820010+0000","last_peered":"2026-03-09T18:29:03.820010+0000","last_clean":"2026-03-09T18:29:03.820010+0000","last_became_active":"2026-03-09T18:29:02.824795+0000","last_became_peered":"2026-03-09T18:29:02.824795+0000","last_unstale":"2026-03-09T18:29:03.820010+0000","last_undegraded":"2026-03-09T18:29:03.820010+0000","last_fullsized":"2026-03-09T18:29:03.820010+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.7876
95+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:47:00.539746+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.a","version":"54'19","reported_seq":52,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.685863+0000","last_change":"2026-03-09T18:28:58.959945+0000","last_active":"2026-03-09T18:29:04.685863+0000","last_peered":"2026-03-09T18:29:04.685863+0000","last_clean":"2026-03-09T18:29:04.685863+0000","last_became_active":"2026-03-09T18:28:58.959787+0000","last_became_peered":"2026-03-09T18:28:58.959787+0000","las
t_unstale":"2026-03-09T18:29:04.685863+0000","last_undegraded":"2026-03-09T18:29:04.685863+0000","last_fullsized":"2026-03-09T18:29:04.685863+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T18:47:47.176049+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":9,"num_object_clones":0,"num_object_copies":27,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":9,"num_whiteouts":0,"num_read":32,"num_read_kb":21,"num_write":20,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,1,7],"acting":[6,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"3
.d","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.799130+0000","last_change":"2026-03-09T18:28:56.742054+0000","last_active":"2026-03-09T18:29:03.799130+0000","last_peered":"2026-03-09T18:29:03.799130+0000","last_clean":"2026-03-09T18:29:03.799130+0000","last_became_active":"2026-03-09T18:28:56.741959+0000","last_became_peered":"2026-03-09T18:28:56.741959+0000","last_unstale":"2026-03-09T18:29:03.799130+0000","last_undegraded":"2026-03-09T18:29:03.799130+0000","last_fullsized":"2026-03-09T18:29:03.799130+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:09:32.302336+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,6],"acting":[7,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.b","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.798887+0000","last_change":"2026-03-09T18:29:00.806973+0000","last_active":"2026-03-09T18:29:03.798887+0000","last_peered":"2026-03-09T18:29:03.798887+0000","last_clean":"2026-03-09T18:29:03.798887+0000","last_became_active":"2026-03-09T18:29:00.806776+0000","last_became_peered":"2026-03-09T18:29:00.806776+0000","last_unstale":"2026-03-09T18:29:03.798887+0000","last_undegraded":"2026-03-09T18:29:03.798887+0000","last_fullsized":"2026-03-09T18:29:03.798887+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.7301
10+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:55:09.973182+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,5],"acting":[2,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.8","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.799115+0000","last_change":"2026-03-09T18:29:02.838553+0000","last_active":"2026-03-09T18:29:03.799115+0000","last_peered":"2026-03-09T18:29:03.799115+0000","last_clean":"2026-03-09T18:29:03.799115+0000","last_became_active":"2026-03-09T18:29:02.838400+0000","last_became_peered":"2026-03-09T18:29:02.838400+0000","last_
unstale":"2026-03-09T18:29:03.799115+0000","last_undegraded":"2026-03-09T18:29:03.799115+0000","last_fullsized":"2026-03-09T18:29:03.799115+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:00:37.254278+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,3],"acting":[7,2,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"4.9","versi
on":"54'12","reported_seq":44,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.389370+0000","last_change":"2026-03-09T18:28:58.768598+0000","last_active":"2026-03-09T18:29:04.389370+0000","last_peered":"2026-03-09T18:29:04.389370+0000","last_clean":"2026-03-09T18:29:04.389370+0000","last_became_active":"2026-03-09T18:28:58.767418+0000","last_became_peered":"2026-03-09T18:28:58.767418+0000","last_unstale":"2026-03-09T18:29:04.389370+0000","last_undegraded":"2026-03-09T18:29:04.389370+0000","last_fullsized":"2026-03-09T18:29:04.389370+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:17:16.961128+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,3],"acting":[4,1,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.e","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.799083+0000","last_change":"2026-03-09T18:28:56.745496+0000","last_active":"2026-03-09T18:29:03.799083+0000","last_peered":"2026-03-09T18:29:03.799083+0000","last_clean":"2026-03-09T18:29:03.799083+0000","last_became_active":"2026-03-09T18:28:56.744938+0000","last_became_peered":"2026-03-09T18:28:56.744938+0000","last_unstale":"2026-03-09T18:29:03.799083+0000","last_undegraded":"2026-03-09T18:29:03.799083+0000","last_fullsized":"2026-03-09T18:29:03.799083+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:5
5.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:06:38.340482+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.8","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.798797+0000","last_change":"2026-03-09T18:29:00.806668+0000","last_active":"2026-03-09T18:29:03.798797+0000","last_peered":"2026-03-09T18:29:03.798797+0000","last_clean":"2026-03-09T18:29:03.798797+0000","last_became_active":"2026-03-09T18:29:00.806474+0000","last_became_peered":"2026-03-09T18:29:00.806474+0000",
"last_unstale":"2026-03-09T18:29:03.798797+0000","last_undegraded":"2026-03-09T18:29:03.798797+0000","last_fullsized":"2026-03-09T18:29:03.798797+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T19:09:22.649859+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,1],"acting":[2,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.b",
"version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.816037+0000","last_change":"2026-03-09T18:29:02.835169+0000","last_active":"2026-03-09T18:29:03.816037+0000","last_peered":"2026-03-09T18:29:03.816037+0000","last_clean":"2026-03-09T18:29:03.816037+0000","last_became_active":"2026-03-09T18:29:02.833612+0000","last_became_peered":"2026-03-09T18:29:02.833612+0000","last_unstale":"2026-03-09T18:29:03.816037+0000","last_undegraded":"2026-03-09T18:29:03.816037+0000","last_fullsized":"2026-03-09T18:29:03.816037+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:27:17.473621+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,1],"acting":[3,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.8","version":"54'15","reported_seq":48,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T18:29:07.708709+0000","last_change":"2026-03-09T18:28:58.956362+0000","last_active":"2026-03-09T18:29:07.708709+0000","last_peered":"2026-03-09T18:29:07.708709+0000","last_clean":"2026-03-09T18:29:07.708709+0000","last_became_active":"2026-03-09T18:28:58.956174+0000","last_became_peered":"2026-03-09T18:28:58.956174+0000","last_unstale":"2026-03-09T18:29:07.708709+0000","last_undegraded":"2026-03-09T18:29:07.708709+0000","last_fullsized":"2026-03-09T18:29:07.708709+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.71
9732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T03:39:48.856610+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,7,6],"acting":[5,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.f","version":"47'2","reported_seq":37,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.031059+0000","last_change":"2026-03-09T18:28:56.746496+0000","last_active":"2026-03-09T18:29:04.031059+0000","last_peered":"2026-03-09T18:29:04.031059+0000","last_clean":"2026-03-09T18:29:04.031059+0000","last_became_active":"2026-03-09T18:28:56.746341+0000","last_became_peered":"2026-03-09T18:28:56.746341+0
000","last_unstale":"2026-03-09T18:29:04.031059+0000","last_undegraded":"2026-03-09T18:29:04.031059+0000","last_fullsized":"2026-03-09T18:29:04.031059+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:26:16.071845+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":92,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":10,"num_read_kb":10,"num_write":4,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,0],"acting":[7,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid
":"5.9","version":"54'8","reported_seq":28,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.261483+0000","last_change":"2026-03-09T18:29:00.826374+0000","last_active":"2026-03-09T18:29:04.261483+0000","last_peered":"2026-03-09T18:29:04.261483+0000","last_clean":"2026-03-09T18:29:04.261483+0000","last_became_active":"2026-03-09T18:29:00.826226+0000","last_became_peered":"2026-03-09T18:29:00.826226+0000","last_unstale":"2026-03-09T18:29:04.261483+0000","last_undegraded":"2026-03-09T18:29:04.261483+0000","last_fullsized":"2026-03-09T18:29:04.261483+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:34:57.862238+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.a","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T18:29:07.708681+0000","last_change":"2026-03-09T18:29:02.809593+0000","last_active":"2026-03-09T18:29:07.708681+0000","last_peered":"2026-03-09T18:29:07.708681+0000","last_clean":"2026-03-09T18:29:07.708681+0000","last_became_active":"2026-03-09T18:29:02.809359+0000","last_became_peered":"2026-03-09T18:29:02.809359+0000","last_unstale":"2026-03-09T18:29:07.708681+0000","last_undegraded":"2026-03-09T18:29:07.708681+0000","last_fullsized":"2026-03-09T18:29:07.708681+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.7876
95+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:54:58.267020+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,6,0],"acting":[5,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.10","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.839162+0000","last_change":"2026-03-09T18:28:56.747934+0000","last_active":"2026-03-09T18:29:03.839162+0000","last_peered":"2026-03-09T18:29:03.839162+0000","last_clean":"2026-03-09T18:29:03.839162+0000","last_became_active":"2026-03-09T18:28:56.747628+0000","last_became_peered":"2026-03-09T18:28:56.747628+0000","last
_unstale":"2026-03-09T18:29:03.839162+0000","last_undegraded":"2026-03-09T18:29:03.839162+0000","last_fullsized":"2026-03-09T18:29:03.839162+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:00:12.102208+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"4.17","ver
sion":"54'6","reported_seq":30,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.604890+0000","last_change":"2026-03-09T18:28:58.958060+0000","last_active":"2026-03-09T18:29:04.604890+0000","last_peered":"2026-03-09T18:29:04.604890+0000","last_clean":"2026-03-09T18:29:04.604890+0000","last_became_active":"2026-03-09T18:28:58.957623+0000","last_became_peered":"2026-03-09T18:28:58.957623+0000","last_unstale":"2026-03-09T18:29:04.604890+0000","last_undegraded":"2026-03-09T18:29:04.604890+0000","last_fullsized":"2026-03-09T18:29:04.604890+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":6,"log_dups_size":0,"ondisk_log_size":6,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:44:42.386720+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":9,"num_read_kb":6,"num_write":6,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,1],"acting":[3,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.16","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T18:29:07.709224+0000","last_change":"2026-03-09T18:29:00.827277+0000","last_active":"2026-03-09T18:29:07.709224+0000","last_peered":"2026-03-09T18:29:07.709224+0000","last_clean":"2026-03-09T18:29:07.709224+0000","last_became_active":"2026-03-09T18:29:00.816344+0000","last_became_peered":"2026-03-09T18:29:00.816344+0000","last_unstale":"2026-03-09T18:29:07.709224+0000","last_undegraded":"2026-03-09T18:29:07.709224+0000","last_fullsized":"2026-03-09T18:29:07.709224+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730
110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T05:07:01.246586+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,1],"acting":[5,3,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.15","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.799473+0000","last_change":"2026-03-09T18:29:02.825834+0000","last_active":"2026-03-09T18:29:03.799473+0000","last_peered":"2026-03-09T18:29:03.799473+0000","last_clean":"2026-03-09T18:29:03.799473+0000","last_became_active":"2026-03-09T18:29:02.825125+0000","last_became_peered":"2026-03-09T18:29:02.825125+0000","las
t_unstale":"2026-03-09T18:29:03.799473+0000","last_undegraded":"2026-03-09T18:29:03.799473+0000","last_fullsized":"2026-03-09T18:29:03.799473+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:57:52.986881+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"4.16","ve
rsion":"54'9","reported_seq":37,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.413864+0000","last_change":"2026-03-09T18:28:58.957319+0000","last_active":"2026-03-09T18:29:04.413864+0000","last_peered":"2026-03-09T18:29:04.413864+0000","last_clean":"2026-03-09T18:29:04.413864+0000","last_became_active":"2026-03-09T18:28:58.957081+0000","last_became_peered":"2026-03-09T18:28:58.957081+0000","last_unstale":"2026-03-09T18:29:04.413864+0000","last_undegraded":"2026-03-09T18:29:04.413864+0000","last_fullsized":"2026-03-09T18:29:04.413864+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:46:24.182293+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,3,7],"acting":[0,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.11","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.798965+0000","last_change":"2026-03-09T18:28:56.745571+0000","last_active":"2026-03-09T18:29:03.798965+0000","last_peered":"2026-03-09T18:29:03.798965+0000","last_clean":"2026-03-09T18:29:03.798965+0000","last_became_active":"2026-03-09T18:28:56.745107+0000","last_became_peered":"2026-03-09T18:28:56.745107+0000","last_unstale":"2026-03-09T18:29:03.798965+0000","last_undegraded":"2026-03-09T18:29:03.798965+0000","last_fullsized":"2026-03-09T18:29:03.798965+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:
55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T23:50:16.216033+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.17","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.815956+0000","last_change":"2026-03-09T18:29:00.822208+0000","last_active":"2026-03-09T18:29:03.815956+0000","last_peered":"2026-03-09T18:29:03.815956+0000","last_clean":"2026-03-09T18:29:03.815956+0000","last_became_active":"2026-03-09T18:29:00.822072+0000","last_became_peered":"2026-03-09T18:29:00.822072+0000
","last_unstale":"2026-03-09T18:29:03.815956+0000","last_undegraded":"2026-03-09T18:29:03.815956+0000","last_fullsized":"2026-03-09T18:29:03.815956+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T23:17:32.895635+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.1
4","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.798390+0000","last_change":"2026-03-09T18:29:02.823224+0000","last_active":"2026-03-09T18:29:03.798390+0000","last_peered":"2026-03-09T18:29:03.798390+0000","last_clean":"2026-03-09T18:29:03.798390+0000","last_became_active":"2026-03-09T18:29:02.823141+0000","last_became_peered":"2026-03-09T18:29:02.823141+0000","last_unstale":"2026-03-09T18:29:03.798390+0000","last_undegraded":"2026-03-09T18:29:03.798390+0000","last_fullsized":"2026-03-09T18:29:03.798390+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:08:30.376630+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,7],"acting":[2,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"4.15","version":"54'9","reported_seq":39,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T18:29:07.709357+0000","last_change":"2026-03-09T18:28:58.956321+0000","last_active":"2026-03-09T18:29:07.709357+0000","last_peered":"2026-03-09T18:29:07.709357+0000","last_clean":"2026-03-09T18:29:07.709357+0000","last_became_active":"2026-03-09T18:28:58.956124+0000","last_became_peered":"2026-03-09T18:28:58.956124+0000","last_unstale":"2026-03-09T18:29:07.709357+0000","last_undegraded":"2026-03-09T18:29:07.709357+0000","last_fullsized":"2026-03-09T18:29:07.709357+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.71
9732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:40:57.133733+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,7,3],"acting":[5,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.12","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.819562+0000","last_change":"2026-03-09T18:28:56.743533+0000","last_active":"2026-03-09T18:29:03.819562+0000","last_peered":"2026-03-09T18:29:03.819562+0000","last_clean":"2026-03-09T18:29:03.819562+0000","last_became_active":"2026-03-09T18:28:56.743367+0000","last_became_peered":"2026-03-09T18:28:56.743367+000
0","last_unstale":"2026-03-09T18:29:03.819562+0000","last_undegraded":"2026-03-09T18:29:03.819562+0000","last_fullsized":"2026-03-09T18:29:03.819562+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:57:42.021880+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.
14","version":"54'8","reported_seq":28,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.259553+0000","last_change":"2026-03-09T18:29:00.822143+0000","last_active":"2026-03-09T18:29:04.259553+0000","last_peered":"2026-03-09T18:29:04.259553+0000","last_clean":"2026-03-09T18:29:04.259553+0000","last_became_active":"2026-03-09T18:29:00.821941+0000","last_became_peered":"2026-03-09T18:29:00.821941+0000","last_unstale":"2026-03-09T18:29:04.259553+0000","last_undegraded":"2026-03-09T18:29:04.259553+0000","last_fullsized":"2026-03-09T18:29:04.259553+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T19:26:45.771903+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,2],"acting":[3,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.17","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.828334+0000","last_change":"2026-03-09T18:29:02.830170+0000","last_active":"2026-03-09T18:29:03.828334+0000","last_peered":"2026-03-09T18:29:03.828334+0000","last_clean":"2026-03-09T18:29:03.828334+0000","last_became_active":"2026-03-09T18:29:02.830002+0000","last_became_peered":"2026-03-09T18:29:02.830002+0000","last_unstale":"2026-03-09T18:29:03.828334+0000","last_undegraded":"2026-03-09T18:29:03.828334+0000","last_fullsized":"2026-03-09T18:29:03.828334+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787
695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T05:29:44.228857+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,5],"acting":[4,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"4.14","version":"54'10","reported_seq":36,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.637164+0000","last_change":"2026-03-09T18:28:58.957812+0000","last_active":"2026-03-09T18:29:04.637164+0000","last_peered":"2026-03-09T18:29:04.637164+0000","last_clean":"2026-03-09T18:29:04.637164+0000","last_became_active":"2026-03-09T18:28:58.957000+0000","last_became_peered":"2026-03-09T18:28:58.957000+0000","l
ast_unstale":"2026-03-09T18:29:04.637164+0000","last_undegraded":"2026-03-09T18:29:04.637164+0000","last_fullsized":"2026-03-09T18:29:04.637164+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T06:10:07.235258+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3
.13","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.798859+0000","last_change":"2026-03-09T18:28:56.745661+0000","last_active":"2026-03-09T18:29:03.798859+0000","last_peered":"2026-03-09T18:29:03.798859+0000","last_clean":"2026-03-09T18:29:03.798859+0000","last_became_active":"2026-03-09T18:28:56.745221+0000","last_became_peered":"2026-03-09T18:28:56.745221+0000","last_unstale":"2026-03-09T18:29:03.798859+0000","last_undegraded":"2026-03-09T18:29:03.798859+0000","last_fullsized":"2026-03-09T18:29:03.798859+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:34:43.697531+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,2],"acting":[7,4,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.15","version":"54'8","reported_seq":30,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T18:29:07.709538+0000","last_change":"2026-03-09T18:29:00.808285+0000","last_active":"2026-03-09T18:29:07.709538+0000","last_peered":"2026-03-09T18:29:07.709538+0000","last_clean":"2026-03-09T18:29:07.709538+0000","last_became_active":"2026-03-09T18:29:00.808157+0000","last_became_peered":"2026-03-09T18:29:00.808157+0000","last_unstale":"2026-03-09T18:29:07.709538+0000","last_undegraded":"2026-03-09T18:29:07.709538+0000","last_fullsized":"2026-03-09T18:29:07.709538+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.73
0110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:35:41.854119+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.16","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.819115+0000","last_change":"2026-03-09T18:29:02.822590+0000","last_active":"2026-03-09T18:29:03.819115+0000","last_peered":"2026-03-09T18:29:03.819115+0000","last_clean":"2026-03-09T18:29:03.819115+0000","last_became_active":"2026-03-09T18:29:02.822402+0000","last_became_peered":"2026-03-09T18:29:02.822402+0000","la
st_unstale":"2026-03-09T18:29:03.819115+0000","last_undegraded":"2026-03-09T18:29:03.819115+0000","last_fullsized":"2026-03-09T18:29:03.819115+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T05:05:08.032760+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.13","v
ersion":"54'11","reported_seq":40,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.618810+0000","last_change":"2026-03-09T18:28:58.778550+0000","last_active":"2026-03-09T18:29:04.618810+0000","last_peered":"2026-03-09T18:29:04.618810+0000","last_clean":"2026-03-09T18:29:04.618810+0000","last_became_active":"2026-03-09T18:28:58.777795+0000","last_became_peered":"2026-03-09T18:28:58.777795+0000","last_unstale":"2026-03-09T18:29:04.618810+0000","last_undegraded":"2026-03-09T18:29:04.618810+0000","last_fullsized":"2026-03-09T18:29:04.618810+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T06:17:45.029074+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.14","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.828862+0000","last_change":"2026-03-09T18:28:56.743372+0000","last_active":"2026-03-09T18:29:03.828862+0000","last_peered":"2026-03-09T18:29:03.828862+0000","last_clean":"2026-03-09T18:29:03.828862+0000","last_became_active":"2026-03-09T18:28:56.743215+0000","last_became_peered":"2026-03-09T18:28:56.743215+0000","last_unstale":"2026-03-09T18:29:03.828862+0000","last_undegraded":"2026-03-09T18:29:03.828862+0000","last_fullsized":"2026-03-09T18:29:03.828862+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:
55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T05:48:34.454253+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,7,6],"acting":[4,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.12","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.825841+0000","last_change":"2026-03-09T18:29:00.809493+0000","last_active":"2026-03-09T18:29:03.825841+0000","last_peered":"2026-03-09T18:29:03.825841+0000","last_clean":"2026-03-09T18:29:03.825841+0000","last_became_active":"2026-03-09T18:29:00.808643+0000","last_became_peered":"2026-03-09T18:29:00.808643+0000
","last_unstale":"2026-03-09T18:29:03.825841+0000","last_undegraded":"2026-03-09T18:29:03.825841+0000","last_fullsized":"2026-03-09T18:29:03.825841+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T05:51:27.723354+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.1
1","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.815871+0000","last_change":"2026-03-09T18:29:02.835220+0000","last_active":"2026-03-09T18:29:03.815871+0000","last_peered":"2026-03-09T18:29:03.815871+0000","last_clean":"2026-03-09T18:29:03.815871+0000","last_became_active":"2026-03-09T18:29:02.833685+0000","last_became_peered":"2026-03-09T18:29:02.833685+0000","last_unstale":"2026-03-09T18:29:03.815871+0000","last_undegraded":"2026-03-09T18:29:03.815871+0000","last_fullsized":"2026-03-09T18:29:03.815871+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:27:57.103570+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,5],"acting":[3,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.12","version":"54'9","reported_seq":37,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.512951+0000","last_change":"2026-03-09T18:28:58.752216+0000","last_active":"2026-03-09T18:29:04.512951+0000","last_peered":"2026-03-09T18:29:04.512951+0000","last_clean":"2026-03-09T18:29:04.512951+0000","last_became_active":"2026-03-09T18:28:58.751968+0000","last_became_peered":"2026-03-09T18:28:58.751968+0000","last_unstale":"2026-03-09T18:29:04.512951+0000","last_undegraded":"2026-03-09T18:29:04.512951+0000","last_fullsized":"2026-03-09T18:29:04.512951+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.71
9732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T05:52:04.891923+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,2],"acting":[1,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.15","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.798823+0000","last_change":"2026-03-09T18:28:56.745753+0000","last_active":"2026-03-09T18:29:03.798823+0000","last_peered":"2026-03-09T18:29:03.798823+0000","last_clean":"2026-03-09T18:29:03.798823+0000","last_became_active":"2026-03-09T18:28:56.745336+0000","last_became_peered":"2026-03-09T18:28:56.745336+000
0","last_unstale":"2026-03-09T18:29:03.798823+0000","last_undegraded":"2026-03-09T18:29:03.798823+0000","last_fullsized":"2026-03-09T18:29:03.798823+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:11:46.879409+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,3,4],"acting":[7,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.
13","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.816703+0000","last_change":"2026-03-09T18:29:00.813351+0000","last_active":"2026-03-09T18:29:03.816703+0000","last_peered":"2026-03-09T18:29:03.816703+0000","last_clean":"2026-03-09T18:29:03.816703+0000","last_became_active":"2026-03-09T18:29:00.813251+0000","last_became_peered":"2026-03-09T18:29:00.813251+0000","last_unstale":"2026-03-09T18:29:03.816703+0000","last_undegraded":"2026-03-09T18:29:03.816703+0000","last_fullsized":"2026-03-09T18:29:03.816703+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:10:04.604899+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.10","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.819013+0000","last_change":"2026-03-09T18:29:02.809194+0000","last_active":"2026-03-09T18:29:03.819013+0000","last_peered":"2026-03-09T18:29:03.819013+0000","last_clean":"2026-03-09T18:29:03.819013+0000","last_became_active":"2026-03-09T18:29:02.809055+0000","last_became_peered":"2026-03-09T18:29:02.809055+0000","last_unstale":"2026-03-09T18:29:03.819013+0000","last_undegraded":"2026-03-09T18:29:03.819013+0000","last_fullsized":"2026-03-09T18:29:03.819013+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787
695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T02:19:06.415313+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,1],"acting":[0,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.11","version":"54'11","reported_seq":40,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.669722+0000","last_change":"2026-03-09T18:28:58.957983+0000","last_active":"2026-03-09T18:29:04.669722+0000","last_peered":"2026-03-09T18:29:04.669722+0000","last_clean":"2026-03-09T18:29:04.669722+0000","last_became_active":"2026-03-09T18:28:58.957447+0000","last_became_peered":"2026-03-09T18:28:58.957447+0000","l
ast_unstale":"2026-03-09T18:29:04.669722+0000","last_undegraded":"2026-03-09T18:29:04.669722+0000","last_fullsized":"2026-03-09T18:29:04.669722+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:01:23.474882+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":
"3.16","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T18:29:07.709004+0000","last_change":"2026-03-09T18:28:56.739190+0000","last_active":"2026-03-09T18:29:07.709004+0000","last_peered":"2026-03-09T18:29:07.709004+0000","last_clean":"2026-03-09T18:29:07.709004+0000","last_became_active":"2026-03-09T18:28:56.739067+0000","last_became_peered":"2026-03-09T18:28:56.739067+0000","last_unstale":"2026-03-09T18:29:07.709004+0000","last_undegraded":"2026-03-09T18:29:07.709004+0000","last_fullsized":"2026-03-09T18:29:07.709004+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:06:20.038676+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,7,1],"acting":[5,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.10","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.799354+0000","last_change":"2026-03-09T18:29:00.826314+0000","last_active":"2026-03-09T18:29:03.799354+0000","last_peered":"2026-03-09T18:29:03.799354+0000","last_clean":"2026-03-09T18:29:03.799354+0000","last_became_active":"2026-03-09T18:29:00.826115+0000","last_became_peered":"2026-03-09T18:29:00.826115+0000","last_unstale":"2026-03-09T18:29:03.799354+0000","last_undegraded":"2026-03-09T18:29:03.799354+0000","last_fullsized":"2026-03-09T18:29:03.799354+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730
110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T06:00:16.431852+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.13","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.816922+0000","last_change":"2026-03-09T18:29:02.830265+0000","last_active":"2026-03-09T18:29:03.816922+0000","last_peered":"2026-03-09T18:29:03.816922+0000","last_clean":"2026-03-09T18:29:03.816922+0000","last_became_active":"2026-03-09T18:29:02.830123+0000","last_became_peered":"2026-03-09T18:29:02.830123+0000","las
t_unstale":"2026-03-09T18:29:03.816922+0000","last_undegraded":"2026-03-09T18:29:03.816922+0000","last_fullsized":"2026-03-09T18:29:03.816922+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:10:01.879835+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,6],"acting":[3,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.10","ve
rsion":"54'4","reported_seq":27,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.394739+0000","last_change":"2026-03-09T18:28:58.762390+0000","last_active":"2026-03-09T18:29:04.394739+0000","last_peered":"2026-03-09T18:29:04.394739+0000","last_clean":"2026-03-09T18:29:04.394739+0000","last_became_active":"2026-03-09T18:28:58.762277+0000","last_became_peered":"2026-03-09T18:28:58.762277+0000","last_unstale":"2026-03-09T18:29:04.394739+0000","last_undegraded":"2026-03-09T18:29:04.394739+0000","last_fullsized":"2026-03-09T18:29:04.394739+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":4,"log_dups_size":0,"ondisk_log_size":4,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T05:42:54.836306+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":6,"num_read_kb":4,"num_write":4,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,6],"acting":[3,1,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.17","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.819206+0000","last_change":"2026-03-09T18:28:56.742203+0000","last_active":"2026-03-09T18:29:03.819206+0000","last_peered":"2026-03-09T18:29:03.819206+0000","last_clean":"2026-03-09T18:29:03.819206+0000","last_became_active":"2026-03-09T18:28:56.742090+0000","last_became_peered":"2026-03-09T18:28:56.742090+0000","last_unstale":"2026-03-09T18:29:03.819206+0000","last_undegraded":"2026-03-09T18:29:03.819206+0000","last_fullsized":"2026-03-09T18:29:03.819206+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698
736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T03:11:06.825932+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,3],"acting":[0,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.11","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.839463+0000","last_change":"2026-03-09T18:29:00.817278+0000","last_active":"2026-03-09T18:29:03.839463+0000","last_peered":"2026-03-09T18:29:03.839463+0000","last_clean":"2026-03-09T18:29:03.839463+0000","last_became_active":"2026-03-09T18:29:00.817119+0000","last_became_peered":"2026-03-09T18:29:00.817119+0000","las
t_unstale":"2026-03-09T18:29:03.839463+0000","last_undegraded":"2026-03-09T18:29:03.839463+0000","last_fullsized":"2026-03-09T18:29:03.839463+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T02:25:36.253006+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.12","ve
rsion":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.798399+0000","last_change":"2026-03-09T18:29:02.824849+0000","last_active":"2026-03-09T18:29:03.798399+0000","last_peered":"2026-03-09T18:29:03.798399+0000","last_clean":"2026-03-09T18:29:03.798399+0000","last_became_active":"2026-03-09T18:29:02.824578+0000","last_became_peered":"2026-03-09T18:29:02.824578+0000","last_unstale":"2026-03-09T18:29:03.798399+0000","last_undegraded":"2026-03-09T18:29:03.798399+0000","last_fullsized":"2026-03-09T18:29:03.798399+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:47:42.094995+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,4],"acting":[7,2,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.1d","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.826613+0000","last_change":"2026-03-09T18:29:02.825236+0000","last_active":"2026-03-09T18:29:03.826613+0000","last_peered":"2026-03-09T18:29:03.826613+0000","last_clean":"2026-03-09T18:29:03.826613+0000","last_became_active":"2026-03-09T18:29:02.824740+0000","last_became_peered":"2026-03-09T18:29:02.824740+0000","last_unstale":"2026-03-09T18:29:03.826613+0000","last_undegraded":"2026-03-09T18:29:03.826613+0000","last_fullsized":"2026-03-09T18:29:03.826613+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:29:01.787695+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:29:01.787
695+0000","last_clean_scrub_stamp":"2026-03-09T18:29:01.787695+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:09:19.985063+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.18","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.816471+0000","last_change":"2026-03-09T18:28:56.739495+0000","last_active":"2026-03-09T18:29:03.816471+0000","last_peered":"2026-03-09T18:29:03.816471+0000","last_clean":"2026-03-09T18:29:03.816471+0000","last_became_active":"2026-03-09T18:28:56.739397+0000","last_became_peered":"2026-03-09T18:28:56.739397+0000","las
t_unstale":"2026-03-09T18:29:03.816471+0000","last_undegraded":"2026-03-09T18:29:03.816471+0000","last_fullsized":"2026-03-09T18:29:03.816471+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:55.698736+0000","last_clean_scrub_stamp":"2026-03-09T18:28:55.698736+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T19:22:08.981940+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.1f","ve
rsion":"54'11","reported_seq":40,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:04.672027+0000","last_change":"2026-03-09T18:28:58.759750+0000","last_active":"2026-03-09T18:29:04.672027+0000","last_peered":"2026-03-09T18:29:04.672027+0000","last_clean":"2026-03-09T18:29:04.672027+0000","last_became_active":"2026-03-09T18:28:58.759643+0000","last_became_peered":"2026-03-09T18:28:58.759643+0000","last_unstale":"2026-03-09T18:29:04.672027+0000","last_undegraded":"2026-03-09T18:29:04.672027+0000","last_fullsized":"2026-03-09T18:29:04.672027+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:57.719732+0000","last_clean_scrub_stamp":"2026-03-09T18:28:57.719732+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:31:10.636369+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,5,1],"acting":[6,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.1e","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T18:29:03.819691+0000","last_change":"2026-03-09T18:29:00.820327+0000","last_active":"2026-03-09T18:29:03.819691+0000","last_peered":"2026-03-09T18:29:03.819691+0000","last_clean":"2026-03-09T18:29:03.819691+0000","last_became_active":"2026-03-09T18:29:00.820189+0000","last_became_peered":"2026-03-09T18:29:00.820189+0000","last_unstale":"2026-03-09T18:29:03.819691+0000","last_undegraded":"2026-03-09T18:29:03.819691+0000","last_fullsized":"2026-03-09T18:29:03.819691+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:28:59.730110+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:28:
59.730110+0000","last_clean_scrub_stamp":"2026-03-09T18:28:59.730110+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T03:15:24.166473+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]}],"pool_stats":[{"poolid":6,"num_pg":32,"stat_sum":{"num_bytes":416,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":3,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"
num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1248,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":2,"ondisk_log_size":2,"up":96,"acting":96,"num_store_stats":8},{"poolid":5,"num_pg":32,"stat_sum":{"num_bytes":0,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":64,"ondisk_log_size":64,"up":96,"acting":96,"num_store_stats":8},{"poolid":4,"
num_pg":32,"stat_sum":{"num_bytes":3702,"num_objects":178,"num_object_clones":0,"num_object_copies":534,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":178,"num_whiteouts":0,"num_read":698,"num_read_kb":455,"num_write":417,"num_write_kb":34,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":417792,"data_stored":11106,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":393,"ondisk_log_size":393,"up":96,"acting":96,"num_store_stats":8},{"poolid":3,"num_pg":32,"stat_sum":{"num_bytes":1613,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":24,"num_read_kb":24,"num_write":10,"num_write_kb":6,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_l
egacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":73728,"data_stored":4839,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":6,"ondisk_log_size":6,"up":96,"acting":96,"num_store_stats":8},{"poolid":2,"num_pg":3,"stat_sum":{"num_bytes":408,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":8,"num_read_kb":3,"num_write":6,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1224,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":8,"ondisk_log_size":8,"up":9,"acting":9,"num_store_stats":7},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_err
ors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":2314240,"data_stored":2296400,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":7}],"osd_stats":[{"osd":7,"up_from":43,"seq":184683593733,"num_pgs":53,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27880,"kb_used_data":1048,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939544,"statfs":{"total":21470642176,"available":21442093056,"internally_reserved":0,"allocated":1073152,"data_stored":705250,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1584,"internal_metadata":27458000},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":6,"up_from":38,"seq":163208757255,"num_pgs":43,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27860,"kb_used_data":1024,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939564,"statfs":{"total":21470642176,"available":21442113536,"internally_reserved":0,"allocated":1048576,"data_stored":704152,"data_compressed":0,"data_compressed_allocated":
0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[0,0,0,0,0,2],"upper_bound":64},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":5,"up_from":33,"seq":141733920778,"num_pgs":47,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27424,"kb_used_data":588,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940000,"statfs":{"total":21470642176,"available":21442560000,"internally_reserved":0,"allocated":602112,"data_stored":252587,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":1,"apply_latency_ms":1,"commit_latency_ns":1000000,"apply_latency_ns":1000000},"alerts":[]},{"osd":4,"up_from":27,"seq":115964117004,"num_pgs":58,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27448,"kb_used_data":612,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939976,"statfs":{"total":21470642176,"available":21442535424,"internally_reserved":0,"allocated":626688,"data_stored":246919,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":3,"up_from":23,"seq":98784247821,"num_pgs":56,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27460,"kb_used_data":
620,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939964,"statfs":{"total":21470642176,"available":21442523136,"internally_reserved":0,"allocated":634880,"data_stored":247205,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1588,"internal_metadata":27457996},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":16,"seq":68719476751,"num_pgs":36,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27400,"kb_used_data":568,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940024,"statfs":{"total":21470642176,"available":21442584576,"internally_reserved":0,"allocated":581632,"data_stored":245020,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":12,"seq":51539607569,"num_pgs":57,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27472,"kb_used_data":636,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939952,"statfs":{"total":21470642176,"available":21442510848,"internally_reserved":0,"allocated":651264,"data_stored":246657,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":5,"apply_latency_ms":5,"commit_latency
_ns":5000000,"apply_latency_ns":5000000},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738387,"num_pgs":42,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27892,"kb_used_data":1056,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939532,"statfs":{"total":21470642176,"available":21442080768,"internally_reserved":0,"allocated":1081344,"data_stored":706375,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":1,"apply_latency_ms":1,"commit_latency_ns":1000000,"apply_latency_ns":1000000},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":6,"total":0,"available":0,"internally_re
served":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":408,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":0,"total":0,"available":0,
"internally_reserved":0,"allocated":20480,"data_stored":1567,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":92,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":12288,"data_stored":1475,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1613,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":92,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":49152,"data_stored":1320,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":1,"total":0,"availa
ble":0,"internally_reserved":0,"allocated":90112,"data_stored":2338,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":32768,"data_stored":798,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":73728,"data_stored":1898,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":53248,"data_stored":1474,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":36864,"data_stored":990,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":36864,"data_stored":1034,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":45056,"data_stored":1254,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":2
,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":403,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":3,"total":0,"available":0
,"internally_reserved":0,"allocated":4096,"data_stored":13,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":13,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":403,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":416,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-09T18:29:10.066 INFO:tasks.cephadm.ceph_manager.ceph:clean! 2026-03-09T18:29:10.066 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 2026-03-09T18:29:10.066 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy 2026-03-09T18:29:10.066 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph health --format=json 2026-03-09T18:29:10.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:09 vm09 ceph-mon[54744]: pgmap v110: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 77 KiB/s rd, 6.2 KiB/s wr, 188 op/s 2026-03-09T18:29:10.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:09 vm09 ceph-mon[54744]: from='client.? 
192.168.123.104:0/1829599515' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-09T18:29:10.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:09 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/2656405828' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T18:29:10.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:09 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/413118903' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T18:29:10.292 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config 2026-03-09T18:29:10.563 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:29:10.563 INFO:teuthology.orchestra.run.vm04.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-09T18:29:10.633 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy done 2026-03-09T18:29:10.634 INFO:tasks.cephadm:Setup complete, yielding 2026-03-09T18:29:10.634 INFO:teuthology.run_tasks:Running task workunit... 2026-03-09T18:29:10.639 INFO:tasks.workunit:Pulling workunits from ref 569c3e99c9b32a51b4eaf08731c728f4513ed589 2026-03-09T18:29:10.639 INFO:tasks.workunit:Making a separate scratch dir for every client... 
2026-03-09T18:29:10.639 DEBUG:teuthology.orchestra.run.vm04:> stat -- /home/ubuntu/cephtest/mnt.0 2026-03-09T18:29:10.654 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T18:29:10.654 INFO:teuthology.orchestra.run.vm04.stderr:stat: cannot statx '/home/ubuntu/cephtest/mnt.0': No such file or directory 2026-03-09T18:29:10.654 DEBUG:teuthology.orchestra.run.vm04:> mkdir -- /home/ubuntu/cephtest/mnt.0 2026-03-09T18:29:10.710 INFO:tasks.workunit:Created dir /home/ubuntu/cephtest/mnt.0 2026-03-09T18:29:10.710 DEBUG:teuthology.orchestra.run.vm04:> cd -- /home/ubuntu/cephtest/mnt.0 && mkdir -- client.0 2026-03-09T18:29:10.766 INFO:tasks.workunit:timeout=1h 2026-03-09T18:29:10.766 INFO:tasks.workunit:cleanup=True 2026-03-09T18:29:10.766 DEBUG:teuthology.orchestra.run.vm04:> rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone https://github.com/kshtsk/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout 569c3e99c9b32a51b4eaf08731c728f4513ed589 2026-03-09T18:29:10.826 INFO:tasks.workunit.client.0.vm04.stderr:Cloning into '/home/ubuntu/cephtest/clone.client.0'... 2026-03-09T18:29:10.834 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:10 vm09 systemd[1]: Starting Ceph prometheus.a for 5769e1c8-1be5-11f1-a591-591820987f3e... 2026-03-09T18:29:10.837 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:10 vm04 ceph-mon[51427]: from='client.14622 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T18:29:10.837 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:10 vm04 ceph-mon[51427]: from='client.14628 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T18:29:10.837 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:10 vm04 ceph-mon[51427]: from='client.? 
192.168.123.104:0/3047453196' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T18:29:10.837 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:10 vm04 ceph-mon[57581]: from='client.14622 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T18:29:10.837 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:10 vm04 ceph-mon[57581]: from='client.14628 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T18:29:10.837 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:10 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3047453196' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T18:29:11.109 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:10 vm09 podman[78687]: 2026-03-09 18:29:10.85558328 +0000 UTC m=+0.022049959 container create 6e27083d9b43d3e8083f92800ed516ea53b1c16594f15013f02665da66f371c3 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a, maintainer=The Prometheus Authors ) 2026-03-09T18:29:11.109 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:10 vm09 podman[78687]: 2026-03-09 18:29:10.892467996 +0000 UTC m=+0.058934686 container init 6e27083d9b43d3e8083f92800ed516ea53b1c16594f15013f02665da66f371c3 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a, maintainer=The Prometheus Authors ) 2026-03-09T18:29:11.109 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:10 vm09 podman[78687]: 2026-03-09 18:29:10.895411805 +0000 UTC m=+0.061878484 container start 6e27083d9b43d3e8083f92800ed516ea53b1c16594f15013f02665da66f371c3 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a, maintainer=The Prometheus Authors ) 2026-03-09T18:29:11.109 
INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:10 vm09 bash[78687]: 6e27083d9b43d3e8083f92800ed516ea53b1c16594f15013f02665da66f371c3 2026-03-09T18:29:11.109 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:10 vm09 podman[78687]: 2026-03-09 18:29:10.847608459 +0000 UTC m=+0.014075138 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0 2026-03-09T18:29:11.109 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:10 vm09 systemd[1]: Started Ceph prometheus.a for 5769e1c8-1be5-11f1-a591-591820987f3e. 2026-03-09T18:29:11.109 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:10 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[78698]: ts=2026-03-09T18:29:10.927Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)" 2026-03-09T18:29:11.109 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:10 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[78698]: ts=2026-03-09T18:29:10.927Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)" 2026-03-09T18:29:11.109 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:10 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[78698]: ts=2026-03-09T18:29:10.927Z caller=main.go:623 level=info host_details="(Linux 5.14.0-686.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Feb 19 10:49:27 UTC 2026 x86_64 vm09 (none))" 2026-03-09T18:29:11.109 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:10 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[78698]: ts=2026-03-09T18:29:10.927Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)" 2026-03-09T18:29:11.109 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:10 vm09 
ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[78698]: ts=2026-03-09T18:29:10.927Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)" 2026-03-09T18:29:11.109 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:10 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[78698]: ts=2026-03-09T18:29:10.932Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=:9095 2026-03-09T18:29:11.109 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:10 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[78698]: ts=2026-03-09T18:29:10.933Z caller=main.go:1129 level=info msg="Starting TSDB ..." 2026-03-09T18:29:11.109 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:10 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[78698]: ts=2026-03-09T18:29:10.935Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9095 2026-03-09T18:29:11.109 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:10 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[78698]: ts=2026-03-09T18:29:10.935Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." 
http2=false address=[::]:9095 2026-03-09T18:29:11.109 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:10 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[78698]: ts=2026-03-09T18:29:10.935Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 2026-03-09T18:29:11.109 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:10 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[78698]: ts=2026-03-09T18:29:10.935Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.052µs 2026-03-09T18:29:11.109 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:10 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[78698]: ts=2026-03-09T18:29:10.935Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while" 2026-03-09T18:29:11.109 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:10 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[78698]: ts=2026-03-09T18:29:10.935Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 2026-03-09T18:29:11.109 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:10 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[78698]: ts=2026-03-09T18:29:10.935Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=15.348µs wal_replay_duration=97.002µs wbl_replay_duration=160ns total_replay_duration=127.227µs 2026-03-09T18:29:11.109 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:10 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[78698]: ts=2026-03-09T18:29:10.936Z caller=main.go:1150 level=info fs_type=XFS_SUPER_MAGIC 2026-03-09T18:29:11.109 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:10 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[78698]: ts=2026-03-09T18:29:10.936Z caller=main.go:1153 level=info msg="TSDB started" 
2026-03-09T18:29:11.109 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:10 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[78698]: ts=2026-03-09T18:29:10.936Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 2026-03-09T18:29:11.109 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:10 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[78698]: ts=2026-03-09T18:29:10.948Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=11.954709ms db_storage=1.002µs remote_storage=1.692µs web_handler=161ns query_engine=841ns scrape=557.473µs scrape_sd=57.909µs notify=731ns notify_sd=671ns rules=11.022395ms tracing=7.445µs 2026-03-09T18:29:11.109 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:10 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[78698]: ts=2026-03-09T18:29:10.948Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." 2026-03-09T18:29:11.109 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:10 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[78698]: ts=2026-03-09T18:29:10.948Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." 2026-03-09T18:29:11.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:10 vm09 ceph-mon[54744]: from='client.14622 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T18:29:11.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:10 vm09 ceph-mon[54744]: from='client.14628 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T18:29:11.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:10 vm09 ceph-mon[54744]: from='client.? 
192.168.123.104:0/3047453196' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T18:29:12.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:11 vm04 ceph-mon[57581]: pgmap v111: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 59 KiB/s rd, 4.7 KiB/s wr, 143 op/s 2026-03-09T18:29:12.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:11 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:29:12.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:11 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:29:12.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:11 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:29:12.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:11 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-09T18:29:12.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:11 vm04 ceph-mon[57581]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:29:12.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:11 vm04 ceph-mon[51427]: pgmap v111: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 59 KiB/s rd, 4.7 KiB/s wr, 143 op/s 2026-03-09T18:29:12.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:11 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:29:12.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:11 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:29:12.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:11 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:29:12.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 
09 18:29:11 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-09T18:29:12.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:11 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:29:12.217 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:12 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ignoring --setuser ceph since I am not root 2026-03-09T18:29:12.217 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:12 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ignoring --setgroup ceph since I am not root 2026-03-09T18:29:12.217 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:12 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:29:12.072+0000 7fac63cde140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T18:29:12.217 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:12 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:29:12.120+0000 7fac63cde140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T18:29:12.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:11 vm09 ceph-mon[54744]: pgmap v111: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 59 KiB/s rd, 4.7 KiB/s wr, 143 op/s 2026-03-09T18:29:12.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:11 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:29:12.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:11 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:29:12.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:11 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:29:12.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:11 vm09 ceph-mon[54744]: 
from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-09T18:29:12.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:11 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' 2026-03-09T18:29:12.359 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:29:12 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: ignoring --setuser ceph since I am not root 2026-03-09T18:29:12.359 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:29:12 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: ignoring --setgroup ceph since I am not root 2026-03-09T18:29:12.359 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:29:12 vm09 ceph-mgr[55966]: -- 192.168.123.109:0/533976028 <== mon.2 v2:192.168.123.104:3301/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 194+0+0 (secure 0 0 0) 0x56399b83f4a0 con 0x56399b81d000 2026-03-09T18:29:12.359 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:29:12 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:29:12.087+0000 7f632fc88140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T18:29:12.359 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:29:12 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:29:12.131+0000 7f632fc88140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T18:29:12.858 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:29:12 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:29:12.557+0000 7f632fc88140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T18:29:12.883 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:12 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:29:12.538+0000 7fac63cde140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T18:29:13.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:12 vm04 ceph-mon[57581]: 
from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-09T18:29:13.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:12 vm04 ceph-mon[57581]: mgrmap e17: y(active, since 2m), standbys: x 2026-03-09T18:29:13.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:12 vm04 ceph-mon[51427]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-09T18:29:13.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:12 vm04 ceph-mon[51427]: mgrmap e17: y(active, since 2m), standbys: x 2026-03-09T18:29:13.217 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:12 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:29:12.883+0000 7fac63cde140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T18:29:13.217 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:12 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T18:29:13.217 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:12 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-09T18:29:13.217 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:12 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: from numpy import show_config as show_numpy_config 2026-03-09T18:29:13.217 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:12 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:29:12.978+0000 7fac63cde140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T18:29:13.217 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:13 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:29:13.018+0000 7fac63cde140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T18:29:13.217 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:13 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:29:13.094+0000 7fac63cde140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T18:29:13.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:12 vm09 ceph-mon[54744]: from='mgr.14150 192.168.123.104:0/1975983173' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-09T18:29:13.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:12 vm09 ceph-mon[54744]: mgrmap e17: y(active, since 2m), standbys: x 2026-03-09T18:29:13.358 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:29:12 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:29:12.903+0000 7f632fc88140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T18:29:13.359 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:29:12 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. 
A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T18:29:13.359 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:29:12 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-09T18:29:13.359 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:29:12 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: from numpy import show_config as show_numpy_config 2026-03-09T18:29:13.359 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:29:13 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:29:13.001+0000 7f632fc88140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T18:29:13.359 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:29:13 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:29:13.043+0000 7f632fc88140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T18:29:13.359 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:29:13 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:29:13.122+0000 7f632fc88140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T18:29:13.926 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:29:13 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:29:13.649+0000 7f632fc88140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T18:29:13.927 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:29:13 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:29:13.765+0000 7f632fc88140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T18:29:13.927 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:29:13 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:29:13.806+0000 7f632fc88140 -1 mgr[py] Module osd_perf_query 
has missing NOTIFY_TYPES member 2026-03-09T18:29:13.927 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:29:13 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:29:13.843+0000 7f632fc88140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T18:29:13.927 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:29:13 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:29:13.886+0000 7f632fc88140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T18:29:13.933 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:13 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:29:13.643+0000 7fac63cde140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T18:29:13.934 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:13 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:29:13.766+0000 7fac63cde140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T18:29:13.934 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:13 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:29:13.810+0000 7fac63cde140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T18:29:13.934 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:13 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:29:13.849+0000 7fac63cde140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T18:29:13.934 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:13 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:29:13.893+0000 7fac63cde140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T18:29:13.934 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:13 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:29:13.934+0000 7fac63cde140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 
2026-03-09T18:29:14.217 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:14 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:29:14.120+0000 7fac63cde140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T18:29:14.217 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:14 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:29:14.173+0000 7fac63cde140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T18:29:14.358 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:29:13 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:29:13.925+0000 7f632fc88140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T18:29:14.358 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:29:14 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:29:14.109+0000 7f632fc88140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T18:29:14.358 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:29:14 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:29:14.160+0000 7f632fc88140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T18:29:14.700 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:29:14 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:29:14.392+0000 7f632fc88140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T18:29:14.717 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:14 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:29:14.425+0000 7fac63cde140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T18:29:15.001 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:29:14 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:29:14.698+0000 7f632fc88140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T18:29:15.001 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 
18:29:14 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:29:14.743+0000 7f632fc88140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T18:29:15.001 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:29:14 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:29:14.793+0000 7f632fc88140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T18:29:15.001 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:29:14 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:29:14.870+0000 7f632fc88140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T18:29:15.001 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:29:14 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:29:14.909+0000 7f632fc88140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T18:29:15.051 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:14 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:29:14.739+0000 7fac63cde140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T18:29:15.051 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:14 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:29:14.787+0000 7fac63cde140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T18:29:15.051 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:14 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:29:14.833+0000 7fac63cde140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T18:29:15.051 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:14 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:29:14.918+0000 7fac63cde140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T18:29:15.051 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:14 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 
2026-03-09T18:29:14.961+0000 7fac63cde140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T18:29:15.051 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:15 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:29:15.050+0000 7fac63cde140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T18:29:15.271 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:29:15 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:29:14.999+0000 7f632fc88140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T18:29:15.271 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:29:15 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:29:15.119+0000 7f632fc88140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T18:29:15.327 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:15 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:29:15.177+0000 7fac63cde140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T18:29:15.327 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:15 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:29:15.326+0000 7fac63cde140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T18:29:15.539 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:15 vm09 ceph-mon[54744]: Standby manager daemon x restarted 2026-03-09T18:29:15.539 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:15 vm09 ceph-mon[54744]: Standby manager daemon x started 2026-03-09T18:29:15.539 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:15 vm09 ceph-mon[54744]: from='mgr.? 192.168.123.109:0/2330620404' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T18:29:15.539 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:15 vm09 ceph-mon[54744]: from='mgr.? 
192.168.123.109:0/2330620404' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T18:29:15.539 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:15 vm09 ceph-mon[54744]: from='mgr.? 192.168.123.109:0/2330620404' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T18:29:15.539 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:15 vm09 ceph-mon[54744]: from='mgr.? 192.168.123.109:0/2330620404' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T18:29:15.540 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:29:15 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:29:15.269+0000 7f632fc88140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T18:29:15.540 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:29:15 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:29:15.311+0000 7f632fc88140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T18:29:15.540 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:29:15 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: [09/Mar/2026:18:29:15] ENGINE Bus STARTING 2026-03-09T18:29:15.540 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:29:15 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: CherryPy Checker: 2026-03-09T18:29:15.540 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:29:15 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: The Application mounted at '' has an empty config. 
2026-03-09T18:29:15.540 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:29:15 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: 2026-03-09T18:29:15.540 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:29:15 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: [09/Mar/2026:18:29:15] ENGINE Serving on http://:::9283 2026-03-09T18:29:15.540 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 18:29:15 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-x[55962]: [09/Mar/2026:18:29:15] ENGINE Bus STARTED 2026-03-09T18:29:15.579 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:15 vm04 ceph-mon[51427]: Standby manager daemon x restarted 2026-03-09T18:29:15.579 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:15 vm04 ceph-mon[51427]: Standby manager daemon x started 2026-03-09T18:29:15.579 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:15 vm04 ceph-mon[51427]: from='mgr.? 192.168.123.109:0/2330620404' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T18:29:15.579 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:15 vm04 ceph-mon[51427]: from='mgr.? 192.168.123.109:0/2330620404' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T18:29:15.579 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:15 vm04 ceph-mon[51427]: from='mgr.? 192.168.123.109:0/2330620404' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T18:29:15.579 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:15 vm04 ceph-mon[51427]: from='mgr.? 
192.168.123.109:0/2330620404' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T18:29:15.579 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:15 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:29:15.368+0000 7fac63cde140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T18:29:15.579 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:15 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: [09/Mar/2026:18:29:15] ENGINE Bus STARTING 2026-03-09T18:29:15.579 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:15 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: CherryPy Checker: 2026-03-09T18:29:15.579 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:15 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: The Application mounted at '' has an empty config. 2026-03-09T18:29:15.579 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:15 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: 2026-03-09T18:29:15.579 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:15 vm04 ceph-mon[57581]: Standby manager daemon x restarted 2026-03-09T18:29:15.579 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:15 vm04 ceph-mon[57581]: Standby manager daemon x started 2026-03-09T18:29:15.579 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:15 vm04 ceph-mon[57581]: from='mgr.? 192.168.123.109:0/2330620404' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T18:29:15.579 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:15 vm04 ceph-mon[57581]: from='mgr.? 192.168.123.109:0/2330620404' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T18:29:15.579 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:15 vm04 ceph-mon[57581]: from='mgr.? 
192.168.123.109:0/2330620404' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T18:29:15.579 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:15 vm04 ceph-mon[57581]: from='mgr.? 192.168.123.109:0/2330620404' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T18:29:15.968 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:15 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: [09/Mar/2026:18:29:15] ENGINE Serving on http://:::9283 2026-03-09T18:29:15.968 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:15 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: [09/Mar/2026:18:29:15] ENGINE Bus STARTED 2026-03-09T18:29:16.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:16 vm09 ceph-mon[54744]: Active manager daemon y restarted 2026-03-09T18:29:16.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:16 vm09 ceph-mon[54744]: Activating manager daemon y 2026-03-09T18:29:16.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:16 vm09 ceph-mon[54744]: mgrmap e18: y(active, since 2m), standbys: x 2026-03-09T18:29:16.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:16 vm09 ceph-mon[54744]: osdmap e56: 8 total, 8 up, 8 in 2026-03-09T18:29:16.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:16 vm09 ceph-mon[54744]: mgrmap e19: y(active, starting, since 0.0292639s), standbys: x 2026-03-09T18:29:16.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:16 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:29:16.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:16 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:29:16.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:16 vm09 ceph-mon[54744]: from='mgr.14637 
192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:29:16.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:16 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T18:29:16.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:16 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T18:29:16.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:16 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:29:16.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:16 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:29:16.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:16 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:29:16.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:16 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:29:16.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:16 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:29:16.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:16 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:29:16.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:16 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' 
entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:29:16.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:16 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:29:16.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:16 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T18:29:16.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:16 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T18:29:16.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:16 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T18:29:16.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:16 vm09 ceph-mon[54744]: Manager daemon y is now available 2026-03-09T18:29:16.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:16 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:16.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:16 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:29:16.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:16 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:29:16.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:16 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:29:16.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:16 vm09 ceph-mon[54744]: from='mgr.14637 ' 
entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:29:16.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:16 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:29:16.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:16 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:29:16.609 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:29:16 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug there is no tcmu-runner data available 2026-03-09T18:29:16.718 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[57581]: Active manager daemon y restarted 2026-03-09T18:29:16.718 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[57581]: Activating manager daemon y 2026-03-09T18:29:16.718 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[57581]: mgrmap e18: y(active, since 2m), standbys: x 2026-03-09T18:29:16.718 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[57581]: osdmap e56: 8 total, 8 up, 8 in 2026-03-09T18:29:16.718 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[57581]: mgrmap e19: y(active, starting, since 0.0292639s), standbys: x 2026-03-09T18:29:16.718 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:29:16.719 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 
09 18:29:16 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[57581]: Manager daemon y is now available 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:29:16.719 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[51427]: Active manager daemon y restarted 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[51427]: Activating manager daemon y 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[51427]: mgrmap e18: y(active, since 2m), standbys: x 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[51427]: osdmap e56: 8 total, 8 up, 8 in 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[51427]: mgrmap e19: y(active, starting, since 0.0292639s), standbys: x 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' 
entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd 
metadata", "id": 6}]: dispatch 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[51427]: Manager daemon y is now available 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' cmd=[{"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:29:16.719 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:16 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:29:17.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:17 vm04 ceph-mon[51427]: mgrmap e20: y(active, since 1.0489s), standbys: x 2026-03-09T18:29:17.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:17 vm04 ceph-mon[51427]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:29:17.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:17 vm04 ceph-mon[51427]: pgmap v3: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:29:17.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:17 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:17.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:17 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:17.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:17 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:17.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:17 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:17.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:17 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:17.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:17 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:17.717 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:17 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:29:17.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:17 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:29:17.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:17 vm04 ceph-mon[57581]: mgrmap e20: y(active, since 1.0489s), standbys: x 2026-03-09T18:29:17.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:17 vm04 ceph-mon[57581]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:29:17.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:17 vm04 ceph-mon[57581]: pgmap v3: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:29:17.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:17 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:17.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:17 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:17.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:17 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:17.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:17 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:17.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:17 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:17.718 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:17 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:17.718 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:17 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": 
"config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:29:17.718 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:17 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:29:17.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:17 vm09 ceph-mon[54744]: mgrmap e20: y(active, since 1.0489s), standbys: x 2026-03-09T18:29:17.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:17 vm09 ceph-mon[54744]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:29:17.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:17 vm09 ceph-mon[54744]: pgmap v3: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:29:17.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:17 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:17.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:17 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:17.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:17 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:17.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:17 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:17.859 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:17 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:17.859 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:17 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:17.859 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:17 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:29:17.859 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:17 
vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:29:18.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[57581]: [09/Mar/2026:18:29:17] ENGINE Bus STARTING 2026-03-09T18:29:18.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[57581]: [09/Mar/2026:18:29:17] ENGINE Client ('192.168.123.104', 54186) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T18:29:18.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[57581]: [09/Mar/2026:18:29:17] ENGINE Serving on https://192.168.123.104:7150 2026-03-09T18:29:18.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[57581]: [09/Mar/2026:18:29:17] ENGINE Serving on http://192.168.123.104:8765 2026-03-09T18:29:18.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[57581]: [09/Mar/2026:18:29:17] ENGINE Bus STARTED 2026-03-09T18:29:18.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[57581]: pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:29:18.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:18.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:18.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:29:18.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": 
"osd_memory_target"}]: dispatch 2026-03-09T18:29:18.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:29:18.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:29:18.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[57581]: Updating vm04:/etc/ceph/ceph.conf 2026-03-09T18:29:18.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[57581]: Updating vm09:/etc/ceph/ceph.conf 2026-03-09T18:29:18.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[57581]: Updating vm09:/var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/config/ceph.conf 2026-03-09T18:29:18.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[57581]: Updating vm04:/var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/config/ceph.conf 2026-03-09T18:29:18.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[57581]: Updating vm09:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:29:18.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[57581]: mgrmap e21: y(active, since 2s), standbys: x 2026-03-09T18:29:18.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:18.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:18.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:18.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:18.967 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:18.968 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:18 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ::ffff:192.168.123.109 - - [09/Mar/2026:18:29:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:29:18.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[51427]: [09/Mar/2026:18:29:17] ENGINE Bus STARTING 2026-03-09T18:29:18.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[51427]: [09/Mar/2026:18:29:17] ENGINE Client ('192.168.123.104', 54186) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T18:29:18.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[51427]: [09/Mar/2026:18:29:17] ENGINE Serving on https://192.168.123.104:7150 2026-03-09T18:29:18.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[51427]: [09/Mar/2026:18:29:17] ENGINE Serving on http://192.168.123.104:8765 2026-03-09T18:29:18.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[51427]: [09/Mar/2026:18:29:17] ENGINE Bus STARTED 2026-03-09T18:29:18.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[51427]: pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:29:18.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:18.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:18.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 
2026-03-09T18:29:18.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:29:18.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:29:18.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:29:18.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[51427]: Updating vm04:/etc/ceph/ceph.conf 2026-03-09T18:29:18.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[51427]: Updating vm09:/etc/ceph/ceph.conf 2026-03-09T18:29:18.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[51427]: Updating vm09:/var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/config/ceph.conf 2026-03-09T18:29:18.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[51427]: Updating vm04:/var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/config/ceph.conf 2026-03-09T18:29:18.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[51427]: Updating vm09:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:29:18.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[51427]: mgrmap e21: y(active, since 2s), standbys: x 2026-03-09T18:29:18.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:18.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:18.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[51427]: from='mgr.14637 ' 
entity='mgr.y' 2026-03-09T18:29:18.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:18.968 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:18 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:19.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:18 vm09 ceph-mon[54744]: [09/Mar/2026:18:29:17] ENGINE Bus STARTING 2026-03-09T18:29:19.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:18 vm09 ceph-mon[54744]: [09/Mar/2026:18:29:17] ENGINE Client ('192.168.123.104', 54186) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T18:29:19.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:18 vm09 ceph-mon[54744]: [09/Mar/2026:18:29:17] ENGINE Serving on https://192.168.123.104:7150 2026-03-09T18:29:19.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:18 vm09 ceph-mon[54744]: [09/Mar/2026:18:29:17] ENGINE Serving on http://192.168.123.104:8765 2026-03-09T18:29:19.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:18 vm09 ceph-mon[54744]: [09/Mar/2026:18:29:17] ENGINE Bus STARTED 2026-03-09T18:29:19.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:18 vm09 ceph-mon[54744]: pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:29:19.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:18 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:19.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:18 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:19.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:18 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:29:19.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:18 
vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:29:19.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:18 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:29:19.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:18 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:29:19.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:18 vm09 ceph-mon[54744]: Updating vm04:/etc/ceph/ceph.conf 2026-03-09T18:29:19.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:18 vm09 ceph-mon[54744]: Updating vm09:/etc/ceph/ceph.conf 2026-03-09T18:29:19.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:18 vm09 ceph-mon[54744]: Updating vm09:/var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/config/ceph.conf 2026-03-09T18:29:19.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:18 vm09 ceph-mon[54744]: Updating vm04:/var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/config/ceph.conf 2026-03-09T18:29:19.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:18 vm09 ceph-mon[54744]: Updating vm09:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:29:19.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:18 vm09 ceph-mon[54744]: mgrmap e21: y(active, since 2s), standbys: x 2026-03-09T18:29:19.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:18 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:19.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:18 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:19.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:18 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:19.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 
09 18:29:18 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:19.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:18 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:19.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:19 vm04 ceph-mon[57581]: Updating vm04:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:29:19.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:19 vm04 ceph-mon[57581]: Updating vm09:/var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/config/ceph.client.admin.keyring 2026-03-09T18:29:19.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:19 vm04 ceph-mon[57581]: Updating vm04:/var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/config/ceph.client.admin.keyring 2026-03-09T18:29:19.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:19 vm04 ceph-mon[57581]: Deploying daemon alertmanager.a on vm04 2026-03-09T18:29:19.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:19 vm04 ceph-mon[51427]: Updating vm04:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:29:19.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:19 vm04 ceph-mon[51427]: Updating vm09:/var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/config/ceph.client.admin.keyring 2026-03-09T18:29:19.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:19 vm04 ceph-mon[51427]: Updating vm04:/var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/config/ceph.client.admin.keyring 2026-03-09T18:29:19.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:19 vm04 ceph-mon[51427]: Deploying daemon alertmanager.a on vm04 2026-03-09T18:29:20.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:19 vm09 ceph-mon[54744]: Updating vm04:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:29:20.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:19 vm09 ceph-mon[54744]: Updating vm09:/var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/config/ceph.client.admin.keyring 2026-03-09T18:29:20.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 
09 18:29:19 vm09 ceph-mon[54744]: Updating vm04:/var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/config/ceph.client.admin.keyring 2026-03-09T18:29:20.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:19 vm09 ceph-mon[54744]: Deploying daemon alertmanager.a on vm04 2026-03-09T18:29:20.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:20 vm04 ceph-mon[57581]: pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:29:20.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:20 vm04 ceph-mon[57581]: mgrmap e22: y(active, since 4s), standbys: x 2026-03-09T18:29:20.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:20 vm04 ceph-mon[51427]: pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:29:20.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:20 vm04 ceph-mon[51427]: mgrmap e22: y(active, since 4s), standbys: x 2026-03-09T18:29:21.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:20 vm09 ceph-mon[54744]: pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:29:21.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:20 vm09 ceph-mon[54744]: mgrmap e22: y(active, since 4s), standbys: x 2026-03-09T18:29:22.322 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:22 vm04 systemd[1]: Starting Ceph alertmanager.a for 5769e1c8-1be5-11f1-a591-591820987f3e... 
2026-03-09T18:29:22.688 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:22 vm04 ceph-mon[51427]: pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:29:22.688 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:22 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:22.688 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:22 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:22.688 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:22 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:22.688 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:22 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:22.689 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:22 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: [09/Mar/2026:18:29:22] ENGINE Bus STOPPING 2026-03-09T18:29:22.689 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:22 vm04 podman[85541]: 2026-03-09 18:29:22.322451818 +0000 UTC m=+0.019147330 volume create 73afc499601b2a82e488f4e550b59aaacbd11b6ecce37fa5fbb1acda3900219e 2026-03-09T18:29:22.689 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:22 vm04 podman[85541]: 2026-03-09 18:29:22.327604974 +0000 UTC m=+0.024300486 container create 40e958a1dff89c70ceca3f0017705f42c3dae0e74c7126c889a6916d936b5957 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-09T18:29:22.689 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:22 vm04 podman[85541]: 2026-03-09 18:29:22.35881397 +0000 UTC m=+0.055509493 container init 40e958a1dff89c70ceca3f0017705f42c3dae0e74c7126c889a6916d936b5957 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-09T18:29:22.689 
INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:22 vm04 podman[85541]: 2026-03-09 18:29:22.361113425 +0000 UTC m=+0.057808937 container start 40e958a1dff89c70ceca3f0017705f42c3dae0e74c7126c889a6916d936b5957 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-09T18:29:22.689 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:22 vm04 bash[85541]: 40e958a1dff89c70ceca3f0017705f42c3dae0e74c7126c889a6916d936b5957 2026-03-09T18:29:22.689 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:22 vm04 podman[85541]: 2026-03-09 18:29:22.31430493 +0000 UTC m=+0.011000451 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0 2026-03-09T18:29:22.689 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:22 vm04 systemd[1]: Started Ceph alertmanager.a for 5769e1c8-1be5-11f1-a591-591820987f3e. 2026-03-09T18:29:22.689 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:22 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-alertmanager-a[85551]: ts=2026-03-09T18:29:22.379Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)" 2026-03-09T18:29:22.689 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:22 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-alertmanager-a[85551]: ts=2026-03-09T18:29:22.379Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)" 2026-03-09T18:29:22.689 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:22 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-alertmanager-a[85551]: ts=2026-03-09T18:29:22.380Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.123.104 port=9094 2026-03-09T18:29:22.689 
INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:22 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-alertmanager-a[85551]: ts=2026-03-09T18:29:22.380Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s 2026-03-09T18:29:22.689 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:22 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-alertmanager-a[85551]: ts=2026-03-09T18:29:22.426Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-09T18:29:22.689 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:22 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-alertmanager-a[85551]: ts=2026-03-09T18:29:22.427Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-09T18:29:22.689 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:22 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-alertmanager-a[85551]: ts=2026-03-09T18:29:22.428Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9093 2026-03-09T18:29:22.689 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:22 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-alertmanager-a[85551]: ts=2026-03-09T18:29:22.428Z caller=tls_config.go:235 level=info msg="TLS is disabled." 
http2=false address=[::]:9093 2026-03-09T18:29:22.689 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:22 vm04 ceph-mon[57581]: pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:29:22.967 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:22 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: [09/Mar/2026:18:29:22] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-09T18:29:22.967 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:22 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: [09/Mar/2026:18:29:22] ENGINE Bus STOPPED 2026-03-09T18:29:22.967 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:22 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: [09/Mar/2026:18:29:22] ENGINE Bus STARTING 2026-03-09T18:29:22.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:22 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:22.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:22 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:22.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:22 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:22.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:22 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:22.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:22 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:22.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:22 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:22.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:22 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T18:29:22.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:22 vm04 ceph-mon[57581]: from='mgr.14637 ' 
entity='mgr.y' 2026-03-09T18:29:22.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:22 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:22.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:22 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:22.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:22 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T18:29:22.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:22 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:23.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:22 vm09 ceph-mon[54744]: pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:29:23.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:22 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:23.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:22 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:23.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:22 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:23.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:22 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:23.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:22 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:23.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:22 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:23.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:22 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T18:29:23.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:22 vm09 
ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:23.467 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:23 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: [09/Mar/2026:18:29:23] ENGINE Serving on http://:::9283 2026-03-09T18:29:23.467 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:23 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: [09/Mar/2026:18:29:23] ENGINE Bus STARTED 2026-03-09T18:29:23.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:23 vm04 ceph-mon[51427]: Regenerating cephadm self-signed grafana TLS certificates 2026-03-09T18:29:23.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:23 vm04 ceph-mon[51427]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T18:29:23.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:23 vm04 ceph-mon[51427]: Deploying daemon grafana.a on vm09 2026-03-09T18:29:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:23 vm04 ceph-mon[57581]: Regenerating cephadm self-signed grafana TLS certificates 2026-03-09T18:29:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:23 vm04 ceph-mon[57581]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T18:29:23.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:23 vm04 ceph-mon[57581]: Deploying daemon grafana.a on vm09 2026-03-09T18:29:24.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:23 vm09 ceph-mon[54744]: Regenerating cephadm self-signed grafana TLS certificates 2026-03-09T18:29:24.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:23 vm09 ceph-mon[54744]: from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T18:29:24.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:23 vm09 ceph-mon[54744]: Deploying daemon grafana.a on vm09 2026-03-09T18:29:24.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:24 vm04 ceph-mon[51427]: pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T18:29:24.717 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:24 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-alertmanager-a[85551]: ts=2026-03-09T18:29:24.380Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000271743s 2026-03-09T18:29:24.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:24 vm04 ceph-mon[57581]: pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T18:29:25.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:24 vm09 ceph-mon[54744]: pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T18:29:26.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:26 vm09 ceph-mon[54744]: pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T18:29:26.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:26 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:26.859 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:29:26 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug there is no tcmu-runner data available 2026-03-09T18:29:26.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:26 vm04 ceph-mon[57581]: pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T18:29:26.967 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:26 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:26.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:26 vm04 ceph-mon[51427]: pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T18:29:26.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:26 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:27.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:27 vm09 ceph-mon[54744]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:29:27.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:27 vm04 ceph-mon[57581]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:29:27.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:27 vm04 ceph-mon[51427]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:29:28.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:28 vm09 ceph-mon[54744]: pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T18:29:28.905 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:28 vm04 ceph-mon[51427]: pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T18:29:28.906 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:28 vm04 ceph-mon[57581]: pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T18:29:28.906 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:28 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ::ffff:192.168.123.109 - - [09/Mar/2026:18:29:28] "GET /metrics HTTP/1.1" 503 1621 
"" "Prometheus/2.51.0" 2026-03-09T18:29:29.862 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:29 vm09 systemd[1]: Starting Ceph grafana.a for 5769e1c8-1be5-11f1-a591-591820987f3e... 2026-03-09T18:29:30.162 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:29 vm09 podman[80208]: 2026-03-09 18:29:29.908805382 +0000 UTC m=+0.024705055 container create 15fea638bb6a4566d412d3ad33bbaae7a5d24a14fdfe5e375a0c9830ed3ad630 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a, maintainer=Grafana Labs ) 2026-03-09T18:29:30.162 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:29 vm09 podman[80208]: 2026-03-09 18:29:29.954658233 +0000 UTC m=+0.070557917 container init 15fea638bb6a4566d412d3ad33bbaae7a5d24a14fdfe5e375a0c9830ed3ad630 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a, maintainer=Grafana Labs ) 2026-03-09T18:29:30.162 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:29 vm09 podman[80208]: 2026-03-09 18:29:29.958475959 +0000 UTC m=+0.074375632 container start 15fea638bb6a4566d412d3ad33bbaae7a5d24a14fdfe5e375a0c9830ed3ad630 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a, maintainer=Grafana Labs ) 2026-03-09T18:29:30.162 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:29 vm09 bash[80208]: 15fea638bb6a4566d412d3ad33bbaae7a5d24a14fdfe5e375a0c9830ed3ad630 2026-03-09T18:29:30.162 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:29 vm09 podman[80208]: 2026-03-09 18:29:29.899324302 +0000 UTC m=+0.015223984 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0 2026-03-09T18:29:30.163 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:29 vm09 systemd[1]: Started Ceph grafana.a for 5769e1c8-1be5-11f1-a591-591820987f3e. 
2026-03-09T18:29:30.163 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=settings t=2026-03-09T18:29:30.099817474Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2026-03-09T18:29:30Z 2026-03-09T18:29:30.163 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=settings t=2026-03-09T18:29:30.100467736Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini 2026-03-09T18:29:30.163 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=settings t=2026-03-09T18:29:30.100476352Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini 2026-03-09T18:29:30.163 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=settings t=2026-03-09T18:29:30.100479969Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" 2026-03-09T18:29:30.163 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=settings t=2026-03-09T18:29:30.100482293Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" 2026-03-09T18:29:30.163 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=settings t=2026-03-09T18:29:30.100484347Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" 2026-03-09T18:29:30.163 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=settings t=2026-03-09T18:29:30.100490098Z level=info msg="Config overridden from 
command line" arg="default.paths.provisioning=/etc/grafana/provisioning" 2026-03-09T18:29:30.163 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=settings t=2026-03-09T18:29:30.100492042Z level=info msg="Config overridden from command line" arg="default.log.mode=console" 2026-03-09T18:29:30.163 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=settings t=2026-03-09T18:29:30.100494767Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" 2026-03-09T18:29:30.163 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=settings t=2026-03-09T18:29:30.100497321Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" 2026-03-09T18:29:30.163 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=settings t=2026-03-09T18:29:30.100499445Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" 2026-03-09T18:29:30.163 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=settings t=2026-03-09T18:29:30.10069835Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" 2026-03-09T18:29:30.163 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=settings t=2026-03-09T18:29:30.100701055Z level=info msg=Target target=[all] 2026-03-09T18:29:30.163 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=settings t=2026-03-09T18:29:30.100749986Z 
level=info msg="Path Home" path=/usr/share/grafana 2026-03-09T18:29:30.163 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=settings t=2026-03-09T18:29:30.100752331Z level=info msg="Path Data" path=/var/lib/grafana 2026-03-09T18:29:30.163 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=settings t=2026-03-09T18:29:30.100754194Z level=info msg="Path Logs" path=/var/log/grafana 2026-03-09T18:29:30.163 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=settings t=2026-03-09T18:29:30.100756258Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins 2026-03-09T18:29:30.163 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=settings t=2026-03-09T18:29:30.10083142Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning 2026-03-09T18:29:30.163 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=settings t=2026-03-09T18:29:30.100834134Z level=info msg="App mode production" 2026-03-09T18:29:30.163 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=sqlstore t=2026-03-09T18:29:30.101653525Z level=info msg="Connecting to DB" dbtype=sqlite3 2026-03-09T18:29:30.163 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=sqlstore t=2026-03-09T18:29:30.101701545Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r----- 2026-03-09T18:29:30.163 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 
ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.10222587Z level=info msg="Starting DB migrations" 2026-03-09T18:29:30.163 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.103256006Z level=info msg="Executing migration" id="create migration_log table" 2026-03-09T18:29:30.163 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.104063554Z level=info msg="Migration successfully executed" id="create migration_log table" duration=807.297µs 2026-03-09T18:29:30.163 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.105275713Z level=info msg="Executing migration" id="create user table" 2026-03-09T18:29:30.163 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.105857446Z level=info msg="Migration successfully executed" id="create user table" duration=582.083µs 2026-03-09T18:29:30.163 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.106591807Z level=info msg="Executing migration" id="add unique index user.login" 2026-03-09T18:29:30.163 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.10704082Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=448.924µs 2026-03-09T18:29:30.163 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator 
t=2026-03-09T18:29:30.107731198Z level=info msg="Executing migration" id="add unique index user.email" 2026-03-09T18:29:30.163 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.108208044Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=476.524µs 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.108919531Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.10935498Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=434.988µs 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.109906306Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.110251354Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=345.34µs 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.110709595Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator 
t=2026-03-09T18:29:30.111751303Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=1.040355ms 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.112238699Z level=info msg="Executing migration" id="create user table v2" 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.112878852Z level=info msg="Migration successfully executed" id="create user table v2" duration=638.891µs 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.113368112Z level=info msg="Executing migration" id="create index UQE_user_login - v2" 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.113780798Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=412.847µs 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.114228589Z level=info msg="Executing migration" id="create index UQE_user_email - v2" 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.114589117Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=360.587µs 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: 
logger=migrator t=2026-03-09T18:29:30.115153908Z level=info msg="Executing migration" id="copy data_source v1 to v2" 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.115365436Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=211.447µs 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.11580405Z level=info msg="Executing migration" id="Drop old table user_v1" 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.116100588Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=296.357µs 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.116576673Z level=info msg="Executing migration" id="Add column help_flags1 to user table" 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.117134291Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=557.568µs 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.117609534Z level=info msg="Executing migration" id="Update user table charset" 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator 
t=2026-03-09T18:29:30.117650832Z level=info msg="Migration successfully executed" id="Update user table charset" duration=41.909µs 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.118187791Z level=info msg="Executing migration" id="Add last_seen_at column to user" 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.118893537Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=705.476µs 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.119463628Z level=info msg="Executing migration" id="Add missing user data" 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.119692358Z level=info msg="Migration successfully executed" id="Add missing user data" duration=227.688µs 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.120334125Z level=info msg="Executing migration" id="Add is_disabled column to user" 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.121162501Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=811.997µs 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator 
t=2026-03-09T18:29:30.121852729Z level=info msg="Executing migration" id="Add index user.login/user.email" 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.122364982Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=512.735µs 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.122985187Z level=info msg="Executing migration" id="Add is_service_account column to user" 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.123697617Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=712.861µs 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.124162461Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.127517075Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=3.35281ms 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.128082147Z level=info msg="Executing migration" id="Add uid column to user" 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 
ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.128645857Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=564.07µs 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.129116511Z level=info msg="Executing migration" id="Update uid column values for users" 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.129239693Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=123.603µs 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.129705659Z level=info msg="Executing migration" id="Add unique index user_uid" 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.13005724Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=350.118µs 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.130693786Z level=info msg="Executing migration" id="create temp user table v1-7" 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.131160042Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=466.026µs 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 
ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.131769849Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.132112373Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=343.044µs 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.132872611Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 2026-03-09T18:29:30.164 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.133208312Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=336.433µs 2026-03-09T18:29:30.165 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.133878311Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" 2026-03-09T18:29:30.165 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.13422326Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=346.362µs 2026-03-09T18:29:30.165 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.134758546Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" 2026-03-09T18:29:30.165 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.13509074Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=332.084µs 2026-03-09T18:29:30.165 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.135664298Z level=info msg="Executing migration" id="Update temp_user table charset" 2026-03-09T18:29:30.165 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.135696288Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=32.31µs 2026-03-09T18:29:30.165 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.136177714Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 2026-03-09T18:29:30.165 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.136524635Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=348.324µs 2026-03-09T18:29:30.165 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.136950265Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 2026-03-09T18:29:30.165 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.137284694Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" 
duration=334.438µs 2026-03-09T18:29:30.165 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.137715824Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 2026-03-09T18:29:30.165 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.138052277Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=336.403µs 2026-03-09T18:29:30.165 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.138458861Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 2026-03-09T18:29:30.165 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.138839907Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=380.976µs 2026-03-09T18:29:30.165 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.139303689Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 2026-03-09T18:29:30.165 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.140667011Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=1.362781ms 2026-03-09T18:29:30.165 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.141151582Z 
level=info msg="Executing migration" id="create temp_user v2" 2026-03-09T18:29:30.165 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.141564358Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=412.445µs 2026-03-09T18:29:30.165 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.142006298Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 2026-03-09T18:29:30.165 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.142388306Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=381.817µs 2026-03-09T18:29:30.165 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.14287428Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 2026-03-09T18:29:30.165 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.14324119Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=366.869µs 2026-03-09T18:29:30.165 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.143696294Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 2026-03-09T18:29:30.165 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.144042545Z 
level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=346.351µs 2026-03-09T18:29:30.165 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.144477573Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 2026-03-09T18:29:30.165 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.14487016Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=392.607µs 2026-03-09T18:29:30.165 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.145416247Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 2026-03-09T18:29:30.165 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.145685494Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=268.634µs 2026-03-09T18:29:30.165 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.146092779Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 2026-03-09T18:29:30.165 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.146385609Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=292.862µs 2026-03-09T18:29:30.165 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator 
t=2026-03-09T18:29:30.146854681Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 2026-03-09T18:29:30.165 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.147073863Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=218.551µs 2026-03-09T18:29:30.165 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.147592828Z level=info msg="Executing migration" id="create star table" 2026-03-09T18:29:30.165 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.147914342Z level=info msg="Migration successfully executed" id="create star table" duration=321.203µs 2026-03-09T18:29:30.165 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.148355111Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 2026-03-09T18:29:30.165 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.148740595Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=385.505µs 2026-03-09T18:29:30.165 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.149290148Z level=info msg="Executing migration" id="create org table v1" 2026-03-09T18:29:30.165 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 
ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.14966343Z level=info msg="Migration successfully executed" id="create org table v1" duration=372.201µs 2026-03-09T18:29:30.165 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.150216159Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 2026-03-09T18:29:30.165 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.150616572Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=400.733µs 2026-03-09T18:29:30.165 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.151496155Z level=info msg="Executing migration" id="create org_user table v1" 2026-03-09T18:29:30.166 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.15183966Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=343.947µs 2026-03-09T18:29:30.166 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.152692102Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 2026-03-09T18:29:30.166 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.153078849Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=386.737µs 2026-03-09T18:29:30.166 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 
vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.153609666Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 2026-03-09T18:29:30.166 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.153986054Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=374.995µs 2026-03-09T18:29:30.166 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.154570934Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 2026-03-09T18:29:30.166 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.15492523Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=354.025µs 2026-03-09T18:29:30.166 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.155484541Z level=info msg="Executing migration" id="Update org table charset" 2026-03-09T18:29:30.166 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.155533473Z level=info msg="Migration successfully executed" id="Update org table charset" duration=49.172µs 2026-03-09T18:29:30.166 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.15598475Z level=info msg="Executing migration" id="Update org_user table charset" 2026-03-09T18:29:30.166 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.156017513Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=33.074µs 2026-03-09T18:29:30.166 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.156385404Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 2026-03-09T18:29:30.166 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.156489019Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=103.866µs 2026-03-09T18:29:30.166 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.157211046Z level=info msg="Executing migration" id="create dashboard table" 2026-03-09T18:29:30.166 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.157622399Z level=info msg="Migration successfully executed" id="create dashboard table" duration=410.011µs 2026-03-09T18:29:30.166 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.158160251Z level=info msg="Executing migration" id="add index dashboard.account_id" 2026-03-09T18:29:30.166 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.158594936Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=435.006µs 
2026-03-09T18:29:30.417 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.159238747Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 2026-03-09T18:29:30.417 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.159680978Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=443.624µs 2026-03-09T18:29:30.417 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.160227636Z level=info msg="Executing migration" id="create dashboard_tag table" 2026-03-09T18:29:30.417 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.160583234Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=355.659µs 2026-03-09T18:29:30.417 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.161113972Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 2026-03-09T18:29:30.417 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.161483426Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=370.446µs 2026-03-09T18:29:30.417 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.162023852Z level=info msg="Executing migration" 
id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 2026-03-09T18:29:30.417 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.162372677Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=349.065µs 2026-03-09T18:29:30.418 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.162821932Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 2026-03-09T18:29:30.418 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.16562432Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=2.799431ms 2026-03-09T18:29:30.418 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.166239796Z level=info msg="Executing migration" id="create dashboard v2" 2026-03-09T18:29:30.418 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.166756968Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=516.1µs 2026-03-09T18:29:30.418 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.167314545Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 2026-03-09T18:29:30.418 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.167812071Z 
level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=499.149µs 2026-03-09T18:29:30.418 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.168624768Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 2026-03-09T18:29:30.418 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.169058624Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=433.775µs 2026-03-09T18:29:30.418 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.169807991Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 2026-03-09T18:29:30.418 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.170041581Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=233.519µs 2026-03-09T18:29:30.418 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.170497749Z level=info msg="Executing migration" id="drop table dashboard_v1" 2026-03-09T18:29:30.418 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.171095543Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=597.763µs 2026-03-09T18:29:30.418 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator 
t=2026-03-09T18:29:30.171633935Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 2026-03-09T18:29:30.418 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.171690781Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=57.427µs 2026-03-09T18:29:30.418 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.172274748Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 2026-03-09T18:29:30.418 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.17311603Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=829.959µs 2026-03-09T18:29:30.418 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.173646146Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 2026-03-09T18:29:30.418 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.174412016Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=766.211µs 2026-03-09T18:29:30.418 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.17492502Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 2026-03-09T18:29:30.418 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 
ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.175672073Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=746.813µs 2026-03-09T18:29:30.418 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.176148328Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 2026-03-09T18:29:30.418 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.176555454Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=407.046µs 2026-03-09T18:29:30.418 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.177038862Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 2026-03-09T18:29:30.418 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.177707579Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=668.617µs 2026-03-09T18:29:30.418 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.178182562Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 2026-03-09T18:29:30.418 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.178592523Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=409.75µs 2026-03-09T18:29:30.418 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.179050554Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 2026-03-09T18:29:30.418 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.179408837Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=358.082µs 2026-03-09T18:29:30.419 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.180035626Z level=info msg="Executing migration" id="Update dashboard table charset" 2026-03-09T18:29:30.419 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.180067986Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=32.771µs 2026-03-09T18:29:30.419 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.180586471Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 2026-03-09T18:29:30.419 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.180618902Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=32.731µs 2026-03-09T18:29:30.419 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.181132336Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 
2026-03-09T18:29:30.419 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.181908886Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=776.339µs 2026-03-09T18:29:30.419 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.18237976Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 2026-03-09T18:29:30.419 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.183135982Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=756.182µs 2026-03-09T18:29:30.419 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.183618469Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 2026-03-09T18:29:30.419 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.184290131Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=671.753µs 2026-03-09T18:29:30.419 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.184720711Z level=info msg="Executing migration" id="Add column uid in dashboard" 2026-03-09T18:29:30.419 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.185488845Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" 
duration=768.355µs 2026-03-09T18:29:30.419 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.186007359Z level=info msg="Executing migration" id="Update uid column values in dashboard" 2026-03-09T18:29:30.419 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.186177459Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=170.451µs 2026-03-09T18:29:30.419 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.186803275Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" 2026-03-09T18:29:30.419 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.18731674Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=513.635µs 2026-03-09T18:29:30.419 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.1882428Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 2026-03-09T18:29:30.419 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.188730537Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=486.374µs 2026-03-09T18:29:30.419 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.189224806Z level=info msg="Executing migration" 
id="Update dashboard title length" 2026-03-09T18:29:30.419 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.189258059Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=44.213µs 2026-03-09T18:29:30.419 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.189826938Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 2026-03-09T18:29:30.419 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.19025905Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=432.102µs 2026-03-09T18:29:30.419 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.190739533Z level=info msg="Executing migration" id="create dashboard_provisioning" 2026-03-09T18:29:30.419 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.191102205Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=362.531µs 2026-03-09T18:29:30.419 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.191691833Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 2026-03-09T18:29:30.419 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator 
t=2026-03-09T18:29:30.193430602Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=1.738198ms 2026-03-09T18:29:30.419 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.194061076Z level=info msg="Executing migration" id="create dashboard_provisioning v2" 2026-03-09T18:29:30.419 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.194397829Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=336.844µs 2026-03-09T18:29:30.419 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.195085241Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" 2026-03-09T18:29:30.419 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.195455367Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=370.296µs 2026-03-09T18:29:30.419 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.196108234Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 2026-03-09T18:29:30.419 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.196475554Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - 
v2" duration=367.432µs 2026-03-09T18:29:30.419 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.197045556Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" 2026-03-09T18:29:30.419 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.197225203Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=178.014µs 2026-03-09T18:29:30.419 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.197657756Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" 2026-03-09T18:29:30.419 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.197942903Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=285.267µs 2026-03-09T18:29:30.419 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.19837326Z level=info msg="Executing migration" id="Add check_sum column" 2026-03-09T18:29:30.419 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.199141745Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=769.566µs 2026-03-09T18:29:30.419 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.199625193Z level=info msg="Executing migration" id="Add index for 
dashboard_title" 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.199965934Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=341µs 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.200397736Z level=info msg="Executing migration" id="delete tags for deleted dashboards" 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.200498114Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=100.559µs 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.201039132Z level=info msg="Executing migration" id="delete stars for deleted dashboards" 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.201138369Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=99.497µs 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.201608462Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.201970903Z level=info msg="Migration successfully executed" id="Add 
index for dashboard_is_folder" duration=362.292µs 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.202599896Z level=info msg="Executing migration" id="Add isPublic for dashboard" 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.203433943Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=834.138µs 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.203892254Z level=info msg="Executing migration" id="create data_source table" 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.20431573Z level=info msg="Migration successfully executed" id="create data_source table" duration=423.196µs 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.20491169Z level=info msg="Executing migration" id="add index data_source.account_id" 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.205270204Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=358.393µs 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.205829285Z level=info msg="Executing migration" id="add unique index 
data_source.account_id_name" 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.206182979Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=352.252µs 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.206728144Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.207099222Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=371.418µs 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.207574426Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.207923682Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=349.367µs 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.208367797Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator 
t=2026-03-09T18:29:30.210302753Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=1.934867ms 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.210928559Z level=info msg="Executing migration" id="create data_source table v2" 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.211772917Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=844.007µs 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.212309574Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.212795889Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=487.196µs 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.213276282Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.213782032Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=505.739µs 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 
ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.214557379Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.214923719Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=366.54µs 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.215468572Z level=info msg="Executing migration" id="Add column with_credentials" 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.216392098Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=923.656µs 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.216878533Z level=info msg="Executing migration" id="Add secure json data column" 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.217696479Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=818.177µs 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.218120457Z level=info msg="Executing migration" id="Update data_source table charset" 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 
ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.218155392Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=35.326µs 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.218642488Z level=info msg="Executing migration" id="Update initial version to 1" 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.218761471Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=119.534µs 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.219317456Z level=info msg="Executing migration" id="Add read_only data column" 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.220124633Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=806.917µs 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.220599206Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.220716135Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=117.16µs 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 
ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.221177012Z level=info msg="Executing migration" id="Update json_data with nulls" 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.221310703Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=135.013µs 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.221690096Z level=info msg="Executing migration" id="Add uid column" 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.222503184Z level=info msg="Migration successfully executed" id="Add uid column" duration=813.168µs 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.222952678Z level=info msg="Executing migration" id="Update uid value" 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.223068536Z level=info msg="Migration successfully executed" id="Update uid value" duration=116.448µs 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.22357585Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator 
t=2026-03-09T18:29:30.223965482Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=389.872µs 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.22441093Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.224805501Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=394.281µs 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.225360455Z level=info msg="Executing migration" id="create api_key table" 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.225770074Z level=info msg="Migration successfully executed" id="create api_key table" duration=409.578µs 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.226329335Z level=info msg="Executing migration" id="add index api_key.account_id" 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.226808836Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=479.21µs 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 
ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.227381553Z level=info msg="Executing migration" id="add index api_key.key" 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.227774492Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=392.91µs 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.228331298Z level=info msg="Executing migration" id="add index api_key.account_id_name" 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.228752811Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=421.522µs 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.229574304Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" 2026-03-09T18:29:30.420 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.229962644Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=388.309µs 2026-03-09T18:29:30.421 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.230387432Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" 2026-03-09T18:29:30.421 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 
ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.230799558Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=411.955µs 2026-03-09T18:29:30.421 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.231272456Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" 2026-03-09T18:29:30.421 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.231672638Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=400.392µs 2026-03-09T18:29:30.421 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.232134976Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" 2026-03-09T18:29:30.421 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.234282363Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=2.146134ms 2026-03-09T18:29:30.421 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.235056759Z level=info msg="Executing migration" id="create api_key table v2" 2026-03-09T18:29:30.421 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.235492928Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=435.969µs 2026-03-09T18:29:30.421 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.236051198Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" 2026-03-09T18:29:30.421 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.236476087Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=423.677µs 2026-03-09T18:29:30.421 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.237017804Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" 2026-03-09T18:29:30.421 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.237490033Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=472.229µs 2026-03-09T18:29:30.421 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.238025719Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" 2026-03-09T18:29:30.421 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.238444787Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=419.088µs 2026-03-09T18:29:30.421 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.23909018Z level=info msg="Executing migration" id="copy api_key v1 to v2" 
2026-03-09T18:29:30.421 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.239344778Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=254.438µs 2026-03-09T18:29:30.421 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.239864435Z level=info msg="Executing migration" id="Drop old table api_key_v1" 2026-03-09T18:29:30.421 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.240203733Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=338.726µs 2026-03-09T18:29:30.421 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.240650593Z level=info msg="Executing migration" id="Update api_key table charset" 2026-03-09T18:29:30.421 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.240685398Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=35.406µs 2026-03-09T18:29:30.421 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.241191899Z level=info msg="Executing migration" id="Add expires to api_key table" 2026-03-09T18:29:30.421 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.242071473Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=879.603µs 
2026-03-09T18:29:30.421 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.242534463Z level=info msg="Executing migration" id="Add service account foreign key" 2026-03-09T18:29:30.421 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.24338357Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=849.087µs 2026-03-09T18:29:30.421 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.243988987Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" 2026-03-09T18:29:30.421 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.244094486Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=105.468µs 2026-03-09T18:29:30.421 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.244594615Z level=info msg="Executing migration" id="Add last_used_at to api_key table" 2026-03-09T18:29:30.421 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.245472776Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=877.81µs 2026-03-09T18:29:30.421 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.246009204Z level=info msg="Executing migration" id="Add is_revoked 
column to api_key table" 2026-03-09T18:29:30.421 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.246882756Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=873.552µs 2026-03-09T18:29:30.421 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.247398465Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" 2026-03-09T18:29:30.421 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.24778934Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=391.316µs 2026-03-09T18:29:30.421 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.248254715Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" 2026-03-09T18:29:30.421 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.248568223Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=313.678µs 2026-03-09T18:29:30.421 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.249060409Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" 2026-03-09T18:29:30.421 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.249448818Z level=info msg="Migration 
successfully executed" id="create dashboard_snapshot table v5 #2" duration=388.481µs 2026-03-09T18:29:30.421 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.249943409Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" 2026-03-09T18:29:30.421 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.25033284Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=389.312µs 2026-03-09T18:29:30.421 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.250899766Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" 2026-03-09T18:29:30.421 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.251366944Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=466.848µs 2026-03-09T18:29:30.421 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.251944549Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.252410565Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=465.755µs 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 
ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.253034518Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.253085805Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=51.747µs 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.253607555Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.253643062Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=36.218µs 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.254123284Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.255044507Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=921.183µs 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.255533455Z level=info msg="Executing migration" id="Add encrypted dashboard json column" 
2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.256467761Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=935.848µs 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.257136739Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.257187082Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=50.766µs 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.257724102Z level=info msg="Executing migration" id="create quota table v1" 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.258176332Z level=info msg="Migration successfully executed" id="create quota table v1" duration=452.199µs 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.258716097Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.259088476Z level=info msg="Migration successfully executed" 
id="create index UQE_quota_org_id_user_id_target - v1" duration=372.22µs 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.259711468Z level=info msg="Executing migration" id="Update quota table charset" 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.259752996Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=41.858µs 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.260236073Z level=info msg="Executing migration" id="create plugin_setting table" 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.260656143Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=419.91µs 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.261188564Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.261601951Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=413.276µs 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator 
t=2026-03-09T18:29:30.262132117Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.2631444Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=1.012403ms 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.263656403Z level=info msg="Executing migration" id="Update plugin_setting table charset" 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.263694634Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=37.37µs 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.264190686Z level=info msg="Executing migration" id="create session table" 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.264622528Z level=info msg="Migration successfully executed" id="create session table" duration=432.182µs 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.265245289Z level=info msg="Executing migration" id="Drop old table playlist table" 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator 
t=2026-03-09T18:29:30.265312485Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=67.817µs 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.26579983Z level=info msg="Executing migration" id="Drop old table playlist_item table" 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.265864402Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=64.823µs 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.266353341Z level=info msg="Executing migration" id="create playlist table v2" 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.266744497Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=389.112µs 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.267314538Z level=info msg="Executing migration" id="create playlist item table v2" 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.267683982Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=369.133µs 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: 
logger=migrator t=2026-03-09T18:29:30.268249094Z level=info msg="Executing migration" id="Update playlist table charset" 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.268282667Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=34.384µs 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.268802094Z level=info msg="Executing migration" id="Update playlist_item table charset" 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.268837059Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=35.496µs 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.269326359Z level=info msg="Executing migration" id="Add playlist column created_at" 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.270421487Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=1.095118ms 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.270964618Z level=info msg="Executing migration" id="Add playlist column updated_at" 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: 
logger=migrator t=2026-03-09T18:29:30.27199249Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=1.025798ms 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.272457755Z level=info msg="Executing migration" id="drop preferences table v2" 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.27255071Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=93.366µs 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.273032023Z level=info msg="Executing migration" id="drop preferences table v3" 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.273090834Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=58.971µs 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.273474045Z level=info msg="Executing migration" id="create preferences table v3" 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.273865Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=391.055µs 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator 
t=2026-03-09T18:29:30.274430643Z level=info msg="Executing migration" id="Update preferences table charset" 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.274464115Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=33.673µs 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.27499329Z level=info msg="Executing migration" id="Add column team_id in preferences" 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.276033365Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=1.039824ms 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.276497557Z level=info msg="Executing migration" id="Update team_id column values in preferences" 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.276635717Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=138.24µs 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.277133012Z level=info msg="Executing migration" id="Add column week_start in preferences" 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 
ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.278161044Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=1.028001ms 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.278723532Z level=info msg="Executing migration" id="Add column preferences.json_data" 2026-03-09T18:29:30.422 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.279806667Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=1.082704ms 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.280290566Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.280351572Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=61.295µs 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.281284635Z level=info msg="Executing migration" id="Add preferences index org_id" 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.281738658Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=454.054µs 2026-03-09T18:29:30.423 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.282316304Z level=info msg="Executing migration" id="Add preferences index user_id" 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.282779615Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=463.411µs 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.283343304Z level=info msg="Executing migration" id="create alert table v1" 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.283901584Z level=info msg="Migration successfully executed" id="create alert table v1" duration=558.2µs 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.284501521Z level=info msg="Executing migration" id="add index alert org_id & id " 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.284950376Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=448.803µs 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.285545143Z level=info msg="Executing migration" id="add index alert state" 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 
18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.285912433Z level=info msg="Migration successfully executed" id="add index alert state" duration=367.02µs 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.286450775Z level=info msg="Executing migration" id="add index alert dashboard_id" 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.286862518Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=411.664µs 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.287411721Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.287761478Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=349.668µs 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.288262841Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.288699141Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=436.661µs 2026-03-09T18:29:30.423 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.289244946Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.289665116Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=420.411µs 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.291150648Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.294023839Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=2.872459ms 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.294541271Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.294891219Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=348.274µs 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.29537079Z level=info msg="Executing 
migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.295878976Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=507.564µs 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.296406588Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.296604058Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=197.199µs 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.297053884Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.297347295Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=293.091µs 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.297833489Z level=info msg="Executing migration" id="create alert_notification table v1" 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 
ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.298182715Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=349.136µs 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.298612423Z level=info msg="Executing migration" id="Add column is_default" 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.29978155Z level=info msg="Migration successfully executed" id="Add column is_default" duration=1.169057ms 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.300257786Z level=info msg="Executing migration" id="Add column frequency" 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.301583738Z level=info msg="Migration successfully executed" id="Add column frequency" duration=1.325601ms 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.302079199Z level=info msg="Executing migration" id="Add column send_reminder" 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.30339953Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=1.320061ms 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 
ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.303906804Z level=info msg="Executing migration" id="Add column disable_resolve_message" 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.305044483Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=1.137498ms 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.305579187Z level=info msg="Executing migration" id="add index alert_notification org_id & name" 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.30600654Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=459.715µs 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.306557205Z level=info msg="Executing migration" id="Update alert table charset" 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.306592331Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=35.546µs 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.307064729Z level=info msg="Executing migration" id="Update alert_notification table charset" 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 
09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.307130773Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=66.434µs 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.30755981Z level=info msg="Executing migration" id="create notification_journal table v1" 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.307915768Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=356.19µs 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.308478286Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.308914697Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=437.553µs 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.309457517Z level=info msg="Executing migration" id="drop alert_notification_journal" 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.309861576Z level=info msg="Migration successfully executed" id="drop 
alert_notification_journal" duration=403.998µs 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.310364402Z level=info msg="Executing migration" id="create alert_notification_state table v1" 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.310747651Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=383.411µs 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.311203408Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" 2026-03-09T18:29:30.423 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.311644888Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=441.43µs 2026-03-09T18:29:30.424 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.312090976Z level=info msg="Executing migration" id="Add for to alert table" 2026-03-09T18:29:30.424 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.313253531Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=1.162445ms 2026-03-09T18:29:30.424 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator 
t=2026-03-09T18:29:30.31374225Z level=info msg="Executing migration" id="Add column uid in alert_notification" 2026-03-09T18:29:30.424 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.314915204Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=1.172754ms 2026-03-09T18:29:30.424 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.315425964Z level=info msg="Executing migration" id="Update uid column values in alert_notification" 2026-03-09T18:29:30.424 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.315560137Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=134.413µs 2026-03-09T18:29:30.424 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.316053815Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" 2026-03-09T18:29:30.424 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.316462653Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=408.688µs 2026-03-09T18:29:30.424 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.317063713Z level=info msg="Executing migration" id="Remove unique index org_id_name" 2026-03-09T18:29:30.424 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 
ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.317524419Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=460.735µs 2026-03-09T18:29:30.424 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.317985325Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" 2026-03-09T18:29:30.424 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.319161415Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=1.176241ms 2026-03-09T18:29:30.424 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.319659882Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" 2026-03-09T18:29:30.424 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.319724554Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=65.252µs 2026-03-09T18:29:30.424 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.320218011Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" 2026-03-09T18:29:30.424 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.320648741Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" 
duration=430.81µs 2026-03-09T18:29:30.424 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.321110829Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" 2026-03-09T18:29:30.424 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.321593587Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=482.738µs 2026-03-09T18:29:30.424 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.322417865Z level=info msg="Executing migration" id="Drop old annotation table v4" 2026-03-09T18:29:30.424 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.322492937Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=75.653µs 2026-03-09T18:29:30.424 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.322986785Z level=info msg="Executing migration" id="create annotation table v5" 2026-03-09T18:29:30.424 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.323428536Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=441.5µs 2026-03-09T18:29:30.424 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.324216426Z level=info msg="Executing migration" id="add index 
annotation 0 v3" 2026-03-09T18:29:30.424 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.324698983Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=482.476µs 2026-03-09T18:29:30.424 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.325283301Z level=info msg="Executing migration" id="add index annotation 1 v3" 2026-03-09T18:29:30.424 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.325706517Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=423.226µs 2026-03-09T18:29:30.424 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.326284713Z level=info msg="Executing migration" id="add index annotation 2 v3" 2026-03-09T18:29:30.424 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.326710774Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=425.951µs 2026-03-09T18:29:30.424 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.327298008Z level=info msg="Executing migration" id="add index annotation 3 v3" 2026-03-09T18:29:30.424 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.327772089Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=473.819µs 
2026-03-09T18:29:30.424 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.328336209Z level=info msg="Executing migration" id="add index annotation 4 v3" 2026-03-09T18:29:30.424 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.328803547Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=467.158µs 2026-03-09T18:29:30.424 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.329382576Z level=info msg="Executing migration" id="Update annotation table charset" 2026-03-09T18:29:30.424 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.329420066Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=38.181µs 2026-03-09T18:29:30.424 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.32993284Z level=info msg="Executing migration" id="Add column region_id to annotation table" 2026-03-09T18:29:30.424 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.331373357Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=1.439957ms 2026-03-09T18:29:30.424 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.332109021Z level=info msg="Executing migration" id="Drop category_id index" 
2026-03-09T18:29:30.424 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.332573023Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=465.355µs 2026-03-09T18:29:30.424 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.333064106Z level=info msg="Executing migration" id="Add column tags to annotation table" 2026-03-09T18:29:30.424 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.334270603Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=1.206247ms 2026-03-09T18:29:30.424 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.334739585Z level=info msg="Executing migration" id="Create annotation_tag table v2" 2026-03-09T18:29:30.424 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.335129548Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=389.532µs 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.335664413Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.336092287Z level=info msg="Migration successfully executed" id="Add unique 
index annotation_tag.annotation_id_tag_id" duration=427.514µs 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.33665218Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.337087168Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=434.978µs 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.33758337Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.340861591Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=3.277701ms 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.341364426Z level=info msg="Executing migration" id="Create annotation_tag table v3" 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.341746434Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=383.421µs 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 
ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.34220698Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.342662456Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=455.035µs 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.34325537Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.343438385Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=183.024µs 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.34389916Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.344199234Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=300.003µs 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.344653858Z level=info msg="Executing 
migration" id="Update alert annotations and set TEXT to empty" 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.344770057Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=117.871µs 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.345268434Z level=info msg="Executing migration" id="Add created time to annotation table" 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.346605818Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=1.33607ms 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.347090028Z level=info msg="Executing migration" id="Add updated time to annotation table" 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.348348783Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=1.258234ms 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.348854925Z level=info msg="Executing migration" id="Add index for created in annotation table" 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator 
t=2026-03-09T18:29:30.349290894Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=435.859µs 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.34977271Z level=info msg="Executing migration" id="Add index for updated in annotation table" 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.350184894Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=412.415µs 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.350732574Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.350873528Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=142.227µs 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.351382285Z level=info msg="Executing migration" id="Add epoch_end column" 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.352803736Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=1.42089ms 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 
vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.353281856Z level=info msg="Executing migration" id="Add index for epoch_end" 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.353713537Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=431.721µs 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.354357737Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.354477102Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=119.775µs 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.355009382Z level=info msg="Executing migration" id="Move region to single row" 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.355234666Z level=info msg="Migration successfully executed" id="Move region to single row" duration=225.655µs 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.355722292Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 
ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.356177237Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=454.986µs 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.356660055Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.357087699Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=427.584µs 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.357571437Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.358013188Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=441.33µs 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.358584942Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator 
t=2026-03-09T18:29:30.359006895Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=421.722µs 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.359480826Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.359934449Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=450.326µs 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.360394704Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.360832226Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=437.462µs 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.36127029Z level=info msg="Executing migration" id="Increase tags column to length 4096" 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.361332517Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=62.658µs 2026-03-09T18:29:30.425 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.362016472Z level=info msg="Executing migration" id="create test_data table" 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.362412767Z level=info msg="Migration successfully executed" id="create test_data table" duration=396.073µs 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.363006763Z level=info msg="Executing migration" id="create dashboard_version table v1" 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.363387048Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=380.184µs 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.363941651Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.364355739Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=411.734µs 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.365436451Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and 
dashboard_version.version" 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.365901815Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=464.923µs 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.366532019Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.366656344Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=124.544µs 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.367140323Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.367348394Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=206.648µs 2026-03-09T18:29:30.425 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.367709352Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 
ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.367771339Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=62.376µs 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.368261661Z level=info msg="Executing migration" id="create team table" 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.368680347Z level=info msg="Migration successfully executed" id="create team table" duration=418.597µs 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.369242544Z level=info msg="Executing migration" id="add index team.org_id" 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.36974593Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=503.276µs 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.370585678Z level=info msg="Executing migration" id="add unique index team_org_id_name" 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.371012992Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=426.923µs 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 
ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.371585758Z level=info msg="Executing migration" id="Add column uid in team" 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.372966412Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=1.380504ms 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.373446254Z level=info msg="Executing migration" id="Update uid column values in team" 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.373582822Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=136.847µs 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.374072932Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.374527647Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=452.621µs 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.375091086Z level=info msg="Executing migration" id="create team member table" 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 
ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.375464628Z level=info msg="Migration successfully executed" id="create team member table" duration=373.351µs 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.376062001Z level=info msg="Executing migration" id="add index team_member.org_id" 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.376475298Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=413.948µs 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.377086537Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.377524239Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=438.073µs 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.378115451Z level=info msg="Executing migration" id="add index team_member.team_id" 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.378552642Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=437.251µs 2026-03-09T18:29:30.426 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.37916281Z level=info msg="Executing migration" id="Add column email to team table" 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.380866481Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=1.703351ms 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.381390405Z level=info msg="Executing migration" id="Add column external to team_member table" 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.382944216Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=1.554842ms 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.383738699Z level=info msg="Executing migration" id="Add column permission to team_member table" 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.385122259Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=1.383229ms 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.385619454Z level=info msg="Executing migration" id="create dashboard acl 
table" 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.386072907Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=454.043µs 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.38670782Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.387180088Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=473.5µs 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.387857311Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.388353072Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=491.594µs 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.389015998Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.389455304Z level=info 
msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=439.286µs 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.390031558Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.390458861Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=427.063µs 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.391322393Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.391768422Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=444.144µs 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.392389229Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.39286333Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=474.031µs 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: 
logger=migrator t=2026-03-09T18:29:30.393467605Z level=info msg="Executing migration" id="add index dashboard_permission" 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.393958438Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=491.013µs 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.394541875Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.394827822Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=285.907µs 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.395421848Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.395584324Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=162.536µs 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.396086267Z level=info msg="Executing migration" id="create tag table" 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 
ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.396479386Z level=info msg="Migration successfully executed" id="create tag table" duration=393.199µs 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.397064265Z level=info msg="Executing migration" id="add index tag.key_value" 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.397491869Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=427.503µs 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.398099791Z level=info msg="Executing migration" id="create login attempt table" 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.398431786Z level=info msg="Migration successfully executed" id="create login attempt table" duration=330.873µs 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.399048665Z level=info msg="Executing migration" id="add index login_attempt.username" 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.399486288Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=435.89µs 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 
ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.400097838Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.400555587Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=457.961µs 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.40110005Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.405186482Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=4.085219ms 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.405746193Z level=info msg="Executing migration" id="create login_attempt v2" 2026-03-09T18:29:30.426 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.406104677Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=358.464µs 2026-03-09T18:29:30.427 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.406626036Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" 
2026-03-09T18:29:30.427 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.407052849Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=426.752µs 2026-03-09T18:29:30.427 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.407646485Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" 2026-03-09T18:29:30.427 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.407829639Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=183.315µs 2026-03-09T18:29:30.427 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.408360307Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 2026-03-09T18:29:30.427 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.408693943Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=333.666µs 2026-03-09T18:29:30.427 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.409131095Z level=info msg="Executing migration" id="create user auth table" 2026-03-09T18:29:30.427 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.40946898Z level=info msg="Migration successfully executed" id="create user auth table" 
duration=336.382µs 2026-03-09T18:29:30.427 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.409938702Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 2026-03-09T18:29:30.427 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.410409158Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=470.295µs 2026-03-09T18:29:30.427 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.411222587Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 2026-03-09T18:29:30.427 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.411284993Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=62.566µs 2026-03-09T18:29:30.427 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.411855395Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 2026-03-09T18:29:30.427 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.413425026Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=1.56939ms 2026-03-09T18:29:30.427 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.413933562Z 
level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" 2026-03-09T18:29:30.677 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.41548655Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=1.553099ms 2026-03-09T18:29:30.677 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.41600756Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" 2026-03-09T18:29:30.677 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.417623406Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=1.615777ms 2026-03-09T18:29:30.677 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.418257309Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 2026-03-09T18:29:30.677 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.42024829Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=1.989959ms 2026-03-09T18:29:30.677 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.420901037Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" 2026-03-09T18:29:30.677 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator 
t=2026-03-09T18:29:30.4214664Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=565.923µs 2026-03-09T18:29:30.678 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.422222411Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 2026-03-09T18:29:30.678 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.42426545Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=2.044713ms 2026-03-09T18:29:30.678 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.424920742Z level=info msg="Executing migration" id="create server_lock table" 2026-03-09T18:29:30.678 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.425323549Z level=info msg="Migration successfully executed" id="create server_lock table" duration=403.007µs 2026-03-09T18:29:30.678 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.42598382Z level=info msg="Executing migration" id="add index server_lock.operation_uid" 2026-03-09T18:29:30.678 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.426437363Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=451.649µs 2026-03-09T18:29:30.678 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 
ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.427129303Z level=info msg="Executing migration" id="create user auth token table" 2026-03-09T18:29:30.678 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.42755308Z level=info msg="Migration successfully executed" id="create user auth token table" duration=423.596µs 2026-03-09T18:29:30.678 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.428163597Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 2026-03-09T18:29:30.678 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.428655121Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=491.485µs 2026-03-09T18:29:30.678 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.429298269Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" 2026-03-09T18:29:30.678 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.429759697Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=461.378µs 2026-03-09T18:29:30.678 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.430638749Z level=info msg="Executing migration" id="add index user_auth_token.user_id" 2026-03-09T18:29:30.678 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.431145553Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=506.933µs 2026-03-09T18:29:30.678 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.431871737Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" 2026-03-09T18:29:30.678 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.43362397Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=1.751872ms 2026-03-09T18:29:30.678 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.434163674Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 2026-03-09T18:29:30.678 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.434636723Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=473.259µs 2026-03-09T18:29:30.678 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.43533699Z level=info msg="Executing migration" id="create cache_data table" 2026-03-09T18:29:30.678 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.435784031Z level=info msg="Migration successfully executed" id="create cache_data table" 
duration=447.872µs 2026-03-09T18:29:30.678 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.436533448Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 2026-03-09T18:29:30.678 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.43697557Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=442.543µs 2026-03-09T18:29:30.678 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.437618568Z level=info msg="Executing migration" id="create short_url table v1" 2026-03-09T18:29:30.678 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.438022277Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=403.667µs 2026-03-09T18:29:30.678 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.438708636Z level=info msg="Executing migration" id="add index short_url.org_id-uid" 2026-03-09T18:29:30.678 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.439176406Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=467.579µs 2026-03-09T18:29:30.678 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.439760193Z level=info msg="Executing migration" id="alter table short_url 
alter column created_by type to bigint" 2026-03-09T18:29:30.678 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.439819484Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=59.481µs 2026-03-09T18:29:30.678 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.440356995Z level=info msg="Executing migration" id="delete alert_definition table" 2026-03-09T18:29:30.678 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.440429342Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=72.768µs 2026-03-09T18:29:30.678 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.440975497Z level=info msg="Executing migration" id="recreate alert_definition table" 2026-03-09T18:29:30.678 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.441370861Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=395.243µs 2026-03-09T18:29:30.678 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.442038015Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator 
t=2026-03-09T18:29:30.442503139Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=466.687µs 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.443130097Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.443606002Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=475.635µs 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.444202583Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.444261693Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=59.521µs 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.444801899Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.445278555Z level=info msg="Migration successfully executed" id="drop index in alert_definition 
on org_id and title columns" duration=476.746µs 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.445760161Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.446300235Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=540.776µs 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.446845811Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.447311926Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=466.157µs 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.447843045Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.448309081Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=465.916µs 2026-03-09T18:29:30.679 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.448817307Z level=info msg="Executing migration" id="Add column paused in alert_definition" 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.450561285Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=1.743266ms 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.45114933Z level=info msg="Executing migration" id="drop alert_definition table" 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.451694524Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=544.995µs 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.452242214Z level=info msg="Executing migration" id="delete alert_definition_version table" 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.452315321Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=73.588µs 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.452944444Z level=info msg="Executing migration" id="recreate alert_definition_version table" 
2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.45337319Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=428.937µs 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.453920699Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.454393617Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=472.498µs 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.454906841Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.455398466Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=491.715µs 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.455904287Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" 
2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.455965061Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=61.356µs 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.456470331Z level=info msg="Executing migration" id="drop alert_definition_version table" 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.4569808Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=510.38µs 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.457553236Z level=info msg="Executing migration" id="create alert_instance table" 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.45798127Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=427.793µs 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.458500776Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator 
t=2026-03-09T18:29:30.458979867Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=478.9µs 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.459530723Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.459977042Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=446.199µs 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.460599561Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.462612134Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=2.013616ms 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.463183408Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.463650225Z level=info msg="Migration successfully executed" 
id="remove index def_org_id, def_uid, current_state on alert_instance" duration=467.459µs 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.464198055Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.464654412Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=456.648µs 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.465137802Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.473621807Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=8.479597ms 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.474353162Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.482781172Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=8.404876ms 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 
18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.483543986Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.48410451Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=561.215µs 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.484703906Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.48527027Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=566.155µs 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.48603057Z level=info msg="Executing migration" id="add current_reason column related to current_state" 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.488087746Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=2.056595ms 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.488671082Z 
level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.490424268Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=1.752695ms 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.491017933Z level=info msg="Executing migration" id="create alert_rule table" 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.491477708Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=459.904µs 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.492102301Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.492629903Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=527.441µs 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.493736783Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 
ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.494212477Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=475.654µs 2026-03-09T18:29:30.679 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.494897455Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.495440726Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=541.308µs 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.496170257Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.496272339Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=103.845µs 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.496822213Z level=info msg="Executing migration" id="add column for to alert_rule" 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.49872584Z level=info msg="Migration 
successfully executed" id="add column for to alert_rule" duration=1.903388ms 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.499242902Z level=info msg="Executing migration" id="add column annotations to alert_rule" 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.501071109Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=1.828067ms 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.501660035Z level=info msg="Executing migration" id="add column labels to alert_rule" 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.503582929Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=1.922845ms 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.504079763Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.504612444Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=532.752µs 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 
ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.505170885Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.505806799Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=636.106µs 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.506349769Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.508192825Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=1.842515ms 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.508675401Z level=info msg="Executing migration" id="add panel_id column to alert_rule" 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.510476637Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=1.801065ms 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.511036459Z level=info msg="Executing migration" id="add index in alert_rule on org_id, 
dashboard_uid and panel_id columns" 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.511546848Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=510.339µs 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.512194315Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.513991323Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=1.796727ms 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.514482686Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.516287107Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=1.804332ms 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.516827663Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: 
logger=migrator t=2026-03-09T18:29:30.516891904Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=64.712µs 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.517413685Z level=info msg="Executing migration" id="create alert_rule_version table" 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.517933241Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=519.606µs 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.518586429Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.519058186Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=470.263µs 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.519657783Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.520150368Z level=info msg="Migration successfully executed" id="add 
index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=492.395µs 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.520724106Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.520783689Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=60.174µs 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.521264152Z level=info msg="Executing migration" id="add column for to alert_rule_version" 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.523221651Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=1.957368ms 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.523732452Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.525561629Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=1.829038ms 2026-03-09T18:29:30.680 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.526594511Z level=info msg="Executing migration" id="add column labels to alert_rule_version" 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.528447162Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=1.850428ms 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.528973181Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.530799163Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=1.825961ms 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.531266752Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.533091571Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=1.824629ms 2026-03-09T18:29:30.680 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.533588737Z level=info 
msg="Executing migration" id="fix is_paused column for alert_rule_version table" 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.533651214Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=62.957µs 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.534169818Z level=info msg="Executing migration" id=create_alert_configuration_table 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.534556275Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=386.446µs 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.53518166Z level=info msg="Executing migration" id="Add column default in alert_configuration" 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.53711297Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=1.93138ms 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.537623128Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 
ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.537686227Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=64.302µs 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.538235179Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.540144588Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=1.909107ms 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.540707336Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.541179143Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=472.219µs 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.541823984Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.543745756Z level=info 
msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=1.921562ms 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.544287995Z level=info msg="Executing migration" id=create_ngalert_configuration_table 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.544680765Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=392.849µs 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.545270362Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.54574791Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=477.187µs 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.546410685Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.548933567Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=2.517781ms 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 
vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.549665583Z level=info msg="Executing migration" id="create provenance_type table" 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.550179539Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=513.826µs 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.550926493Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.551593938Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=667.134µs 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.552304714Z level=info msg="Executing migration" id="create alert_image table" 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.552843676Z level=info msg="Migration successfully executed" id="create alert_image table" duration=538.863µs 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.553843425Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 2026-03-09T18:29:30.681 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.554448663Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=605.267µs 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.555429696Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.555540575Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=111.079µs 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.556158517Z level=info msg="Executing migration" id=create_alert_configuration_history_table 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.556691618Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=532.941µs 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.55729331Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.557764184Z level=info msg="Migration successfully 
executed" id="drop non-unique orgID index on alert_configuration" duration=470.925µs 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.558277419Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.558465923Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.559066632Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.559364993Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=300.014µs 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.559922772Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.56036316Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=440.408µs 2026-03-09T18:29:30.681 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.560872216Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.562930034Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=2.057176ms 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.563533378Z level=info msg="Executing migration" id="create library_element table v1" 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.564153664Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=649.57µs 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.56488037Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.565431836Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=552.459µs 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.566092067Z level=info 
msg="Executing migration" id="create library_element_connection table v1" 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.566484745Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=392.959µs 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.567154333Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.567664953Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=509.818µs 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.568211199Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.568698846Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=487.636µs 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.569402138Z level=info msg="Executing migration" id="increase max description length to 2048" 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 
ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.569435652Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=34.043µs 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.569983161Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.570042993Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=59.892µs 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.570471307Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.570672887Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=190.839µs 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.571254189Z level=info msg="Executing migration" id="create data_keys table" 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.571720014Z level=info msg="Migration successfully executed" id="create data_keys table" duration=465.785µs 2026-03-09T18:29:30.681 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.572349618Z level=info msg="Executing migration" id="create secrets table" 2026-03-09T18:29:30.681 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.572740383Z level=info msg="Migration successfully executed" id="create secrets table" duration=390.634µs 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.573315934Z level=info msg="Executing migration" id="rename data_keys name column to id" 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.583117928Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=9.761637ms 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.583828613Z level=info msg="Executing migration" id="add name column into data_keys" 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.58612005Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=2.291927ms 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.586718194Z level=info msg="Executing migration" id="copy data_keys id column values into name" 2026-03-09T18:29:30.682 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.586840955Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=124.174µs 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.587383634Z level=info msg="Executing migration" id="rename data_keys name column to label" 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.598146283Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=10.74199ms 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.59897401Z level=info msg="Executing migration" id="rename data_keys id column back to name" 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.609307351Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=10.329815ms 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.610245444Z level=info msg="Executing migration" id="create kv_store table v1" 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.610764921Z level=info msg="Migration successfully executed" id="create kv_store table 
v1" duration=520.228µs 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.611458143Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.612021893Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=563.819µs 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.612704967Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.612864106Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=159.589µs 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.613475646Z level=info msg="Executing migration" id="create permission table" 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.61394589Z level=info msg="Migration successfully executed" id="create permission table" duration=468.481µs 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.614578588Z 
level=info msg="Executing migration" id="add unique index permission.role_id" 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.615043181Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=464.443µs 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.615924518Z level=info msg="Executing migration" id="add unique index role_id_action_scope" 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.6165903Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=666.273µs 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.617236575Z level=info msg="Executing migration" id="create role table" 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.617727898Z level=info msg="Migration successfully executed" id="create role table" duration=491.353µs 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.618323276Z level=info msg="Executing migration" id="add column display_name" 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.621993705Z level=info msg="Migration 
successfully executed" id="add column display_name" duration=3.667814ms 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.622727234Z level=info msg="Executing migration" id="add column group_name" 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.625061431Z level=info msg="Migration successfully executed" id="add column group_name" duration=2.334408ms 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.625627245Z level=info msg="Executing migration" id="add index role.org_id" 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.626219507Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=592.483µs 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.626964348Z level=info msg="Executing migration" id="add unique index role_org_id_name" 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.627486248Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=521.98µs 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.628115862Z level=info msg="Executing migration" id="add 
index role_org_id_uid" 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.628624107Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=509.466µs 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.629181164Z level=info msg="Executing migration" id="create team role table" 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.629594792Z level=info msg="Migration successfully executed" id="create team role table" duration=413.648µs 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.630175343Z level=info msg="Executing migration" id="add index team_role.org_id" 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.630841214Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=665.581µs 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.63165305Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.632351964Z level=info msg="Migration successfully executed" id="add unique index 
team_role_org_id_team_id_role_id" duration=698.853µs 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.633119106Z level=info msg="Executing migration" id="add index team_role.team_id" 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.633811527Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=692.231µs 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.634614116Z level=info msg="Executing migration" id="create user role table" 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.635179759Z level=info msg="Migration successfully executed" id="create user role table" duration=565.633µs 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.635901194Z level=info msg="Executing migration" id="add index user_role.org_id" 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.636385114Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=484.15µs 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.637014928Z level=info msg="Executing migration" id="add unique index 
user_role_org_id_user_id_role_id" 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.637475603Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=460.586µs 2026-03-09T18:29:30.682 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.638118102Z level=info msg="Executing migration" id="add index user_role.user_id" 2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.638596541Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=478.339µs 2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.639168565Z level=info msg="Executing migration" id="create builtin role table" 2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.639588806Z level=info msg="Migration successfully executed" id="create builtin role table" duration=420.321µs 2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.640379352Z level=info msg="Executing migration" id="add index builtin_role.role_id" 2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.640853703Z level=info msg="Migration successfully executed" 
id="add index builtin_role.role_id" duration=474.371µs 2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.641376065Z level=info msg="Executing migration" id="add index builtin_role.name" 2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.641925027Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=548.722µs 2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.642457577Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" 2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.644905278Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=2.445307ms 2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.645388827Z level=info msg="Executing migration" id="add index builtin_role.org_id" 2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.645871314Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=483.999µs 2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.646476271Z level=info 
msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" 2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.647000566Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=524.015µs 2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.647616905Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" 2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.64809323Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=476.385µs 2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.648536523Z level=info msg="Executing migration" id="add unique index role.uid" 2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.648990887Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=454.404µs 2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.649455059Z level=info msg="Executing migration" id="create seed assignment table" 2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator 
t=2026-03-09T18:29:30.649819384Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=364.314µs
2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.650363116Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.650899374Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=536.378µs
2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.65162091Z level=info msg="Executing migration" id="add column hidden to role table"
2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.654086475Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=2.464833ms
2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.654570133Z level=info msg="Executing migration" id="permission kind migration"
2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.656849156Z level=info msg="Migration successfully executed" id="permission kind migration" duration=2.278633ms
2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.657424608Z level=info msg="Executing migration" id="permission attribute migration"
2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.659721876Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=2.296918ms
2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.66025609Z level=info msg="Executing migration" id="permission identifier migration"
2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.662656011Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=2.398228ms
2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.663195535Z level=info msg="Executing migration" id="add permission identifier index"
2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.663726984Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=531.69µs
2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.664304269Z level=info msg="Executing migration" id="add permission action scope role_id index"
2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.664856096Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=551.327µs
2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.665424244Z level=info msg="Executing migration" id="remove permission role_id action scope index"
2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.665919985Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=495.771µs
2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.666372636Z level=info msg="Executing migration" id="create query_history table v1"
2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.667180195Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=806.947µs
2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.667702035Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.668170014Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=467.739µs
2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.668756447Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.668816069Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=61.917µs
2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.669316459Z level=info msg="Executing migration" id="rbac disabled migrator"
2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.669354701Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=38.943µs
2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.669920515Z level=info msg="Executing migration" id="teams permissions migration"
2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.670166357Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=246.092µs
2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.670740917Z level=info msg="Executing migration" id="dashboard permissions"
2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.671061299Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=316.905µs
2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.671664773Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.671985897Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=321.464µs
2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.672580214Z level=info msg="Executing migration" id="drop managed folder create actions"
2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.672706351Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=126.308µs
2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.673167707Z level=info msg="Executing migration" id="alerting notification permissions"
2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.67344551Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=277.813µs
2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.673936603Z level=info msg="Executing migration" id="create query_history_star table v1"
2026-03-09T18:29:30.683 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.674341775Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=406.384µs
2026-03-09T18:29:30.943 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.674903611Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
2026-03-09T18:29:30.943 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.675406816Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=503.156µs
2026-03-09T18:29:30.943 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.675986276Z level=info msg="Executing migration" id="add column org_id in query_history_star"
2026-03-09T18:29:30.943 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.678740051Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=2.752914ms
2026-03-09T18:29:30.943 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.679379393Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
2026-03-09T18:29:30.943 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.679481314Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=104.897µs
2026-03-09T18:29:30.943 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.680131476Z level=info msg="Executing migration" id="create correlation table v1"
2026-03-09T18:29:30.943 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.680799222Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=667.354µs
2026-03-09T18:29:30.943 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.681586722Z level=info msg="Executing migration" id="add index correlations.uid"
2026-03-09T18:29:30.943 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.682200055Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=613.583µs
2026-03-09T18:29:30.943 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.682870484Z level=info msg="Executing migration" id="add index correlations.source_uid"
2026-03-09T18:29:30.943 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.683369622Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=499.469µs
2026-03-09T18:29:30.943 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.684215022Z level=info msg="Executing migration" id="add correlation config column"
2026-03-09T18:29:30.943 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.686718487Z level=info msg="Migration successfully executed" id="add correlation config column" duration=2.503055ms
2026-03-09T18:29:30.943 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.687255357Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
2026-03-09T18:29:30.943 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.687762158Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=506.863µs
2026-03-09T18:29:30.943 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.688243393Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
2026-03-09T18:29:30.943 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.688750476Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=507.053µs
2026-03-09T18:29:30.943 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.689233844Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
2026-03-09T18:29:30.943 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.695769579Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=6.531728ms
2026-03-09T18:29:30.943 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.696569102Z level=info msg="Executing migration" id="create correlation v2"
2026-03-09T18:29:30.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.697258978Z level=info msg="Migration successfully executed" id="create correlation v2" duration=690.188µs
2026-03-09T18:29:30.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.697941471Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
2026-03-09T18:29:30.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.698468682Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=527.252µs
2026-03-09T18:29:30.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.699252245Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
2026-03-09T18:29:30.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.699845279Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=593.064µs
2026-03-09T18:29:30.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.700469632Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
2026-03-09T18:29:30.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.700990752Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=521.461µs
2026-03-09T18:29:30.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.701615787Z level=info msg="Executing migration" id="copy correlation v1 to v2"
2026-03-09T18:29:30.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.701752184Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=136.718µs
2026-03-09T18:29:30.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.702331622Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
2026-03-09T18:29:30.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.702758564Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=427.032µs
2026-03-09T18:29:30.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.703276789Z level=info msg="Executing migration" id="add provisioning column"
2026-03-09T18:29:30.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.70570395Z level=info msg="Migration successfully executed" id="add provisioning column" duration=2.426921ms
2026-03-09T18:29:30.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.706170778Z level=info msg="Executing migration" id="create entity_events table"
2026-03-09T18:29:30.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.706555741Z level=info msg="Migration successfully executed" id="create entity_events table" duration=385.313µs
2026-03-09T18:29:30.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.707058225Z level=info msg="Executing migration" id="create dashboard public config v1"
2026-03-09T18:29:30.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.707532308Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=473.711µs
2026-03-09T18:29:30.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.708115864Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
2026-03-09T18:29:30.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.708300431Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
2026-03-09T18:29:30.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.708787476Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
2026-03-09T18:29:30.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.70896523Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
2026-03-09T18:29:30.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.70942797Z level=info msg="Executing migration" id="Drop old dashboard public config table"
2026-03-09T18:29:30.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.709838311Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=410.362µs
2026-03-09T18:29:30.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.710314606Z level=info msg="Executing migration" id="recreate dashboard public config v1"
2026-03-09T18:29:30.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.711269612Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=954.915µs
2026-03-09T18:29:30.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.711898533Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
2026-03-09T18:29:30.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.712330686Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=432.223µs
2026-03-09T18:29:30.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.712953736Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
2026-03-09T18:29:30.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.713404844Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=449.695µs
2026-03-09T18:29:30.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.714020672Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
2026-03-09T18:29:30.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.714459878Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=439.226µs
2026-03-09T18:29:30.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.714931274Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
2026-03-09T18:29:30.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.715374848Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=443.854µs
2026-03-09T18:29:30.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.715880126Z level=info msg="Executing migration" id="Drop public config table"
2026-03-09T18:29:30.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.716263287Z level=info msg="Migration successfully executed" id="Drop public config table" duration=383.18µs
2026-03-09T18:29:30.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.716751114Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
2026-03-09T18:29:30.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.71722224Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=471.115µs
2026-03-09T18:29:30.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.717735795Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
2026-03-09T18:29:30.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.718172686Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=436.8µs
2026-03-09T18:29:30.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.718646146Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
2026-03-09T18:29:30.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.71909506Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=449.004µs
2026-03-09T18:29:30.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.71958474Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.720022092Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=437.242µs
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.720697351Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.728734247Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=8.033449ms
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.729620984Z level=info msg="Executing migration" id="add annotations_enabled column"
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.732689431Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=3.067075ms
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.733427858Z level=info msg="Executing migration" id="add time_selection_enabled column"
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.736435481Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=3.006411ms
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.737190751Z level=info msg="Executing migration" id="delete orphaned public dashboards"
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.737389544Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=199.685µs
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.738092385Z level=info msg="Executing migration" id="add share column"
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.74067508Z level=info msg="Migration successfully executed" id="add share column" duration=2.582294ms
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.741269376Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.741402957Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=133.551µs
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.741963962Z level=info msg="Executing migration" id="create file table"
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.742395022Z level=info msg="Migration successfully executed" id="create file table" duration=431.18µs
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.74302158Z level=info msg="Executing migration" id="file table idx: path natural pk"
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.743597893Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=575.942µs
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.744190977Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.744685687Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=493.197µs
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.745249697Z level=info msg="Executing migration" id="create file_meta table"
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.745643859Z level=info msg="Migration successfully executed" id="create file_meta table" duration=394.181µs
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.746210634Z level=info msg="Executing migration" id="file table idx: path key"
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.746673093Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=462.429µs
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.747311974Z level=info msg="Executing migration" id="set path collation in file table"
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.747361577Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=50.114µs
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.747829777Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.74787882Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=49.433µs
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.7483635Z level=info msg="Executing migration" id="managed permissions migration"
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.748626665Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=263.043µs
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.749091327Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.749208488Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=117.381µs
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.749601277Z level=info msg="Executing migration" id="RBAC action name migrator"
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.750222344Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=620.967µs
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.750716322Z level=info msg="Executing migration" id="Add UID column to playlist"
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.754832449Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=4.11228ms
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.7555832Z level=info msg="Executing migration" id="Update uid column values in playlist"
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.755751897Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=168.646µs
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.75633916Z level=info msg="Executing migration" id="Add index for uid in playlist"
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.757100532Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=761.322µs
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.758295759Z level=info msg="Executing migration" id="update group index for alert rules"
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.758575854Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=279.335µs
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.759184157Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.759351743Z level=info
msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=167.506µs 2026-03-09T18:29:30.945 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.759939377Z level=info msg="Executing migration" id="admin only folder/dashboard permission" 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.760613895Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=674.447µs 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.761206489Z level=info msg="Executing migration" id="add action column to seed_assignment" 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.763956939Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=2.75014ms 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.764767683Z level=info msg="Executing migration" id="add scope column to seed_assignment" 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.76867707Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=3.90559ms 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 
ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.769458419Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.770165597Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=707.65µs 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.770744104Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.798645747Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=27.899468ms 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.799404152Z level=info msg="Executing migration" id="add unique index builtin_role_name back" 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.800039076Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=634.965µs 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.80060534Z level=info msg="Executing migration" id="add 
unique index builtin_role_action_scope" 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.801173568Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=568.017µs 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.801880226Z level=info msg="Executing migration" id="add primary key to seed_assigment" 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.810753102Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=8.872536ms 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.8204028Z level=info msg="Executing migration" id="add origin column to seed_assignment" 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.823050486Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=2.647636ms 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.823651886Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.823837305Z level=info 
msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=185.609µs 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.82441464Z level=info msg="Executing migration" id="prevent seeding OnCall access" 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.824583367Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=168.817µs 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.82532988Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.825731706Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=402.787µs 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.826475172Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.826677824Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=203.002µs 2026-03-09T18:29:30.946 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.827345479Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.827562476Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=216.046µs 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.828209222Z level=info msg="Executing migration" id="create folder table" 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.828860716Z level=info msg="Migration successfully executed" id="create folder table" duration=649.089µs 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.829470443Z level=info msg="Executing migration" id="Add index for parent_uid" 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.830214681Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=743.608µs 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.831196196Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" 2026-03-09T18:29:30.946 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.831956405Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=759.668µs 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.832740287Z level=info msg="Executing migration" id="Update folder title length" 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.832759023Z level=info msg="Migration successfully executed" id="Update folder title length" duration=19.366µs 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.833458518Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.834195563Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=736.714µs 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.835012488Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.83579028Z level=info msg="Migration 
successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=780.407µs 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.836416328Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.8370299Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=613.512µs 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.837746297Z level=info msg="Executing migration" id="Sync dashboard and folder table" 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.838028848Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=280.757µs 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.838465399Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.838644726Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=179.549µs 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 
ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.839216701Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.839828Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=611.028µs 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.840397039Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.841069282Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=671.854µs 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.841626721Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.842265722Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=638.55µs 2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.842836775Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" 
2026-03-09T18:29:30.946 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.843386548Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=549.702µs 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.843854237Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.844287521Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=433.183µs 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.844740943Z level=info msg="Executing migration" id="create anon_device table" 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.845122912Z level=info msg="Migration successfully executed" id="create anon_device table" duration=382.119µs 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.845636386Z level=info msg="Executing migration" id="add unique index anon_device.device_id" 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.846223441Z level=info msg="Migration successfully executed" 
id="add unique index anon_device.device_id" duration=586.994µs 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.846865176Z level=info msg="Executing migration" id="add index anon_device.updated_at" 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.847411543Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=546.497µs 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.848024866Z level=info msg="Executing migration" id="create signing_key table" 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.848492105Z level=info msg="Migration successfully executed" id="create signing_key table" duration=467.148µs 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.84915505Z level=info msg="Executing migration" id="add unique index signing_key.key_id" 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.849648047Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=494.339µs 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.850270536Z level=info msg="Executing 
migration" id="set legacy alert migration status in kvstore" 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.850744878Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=474.342µs 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.851269915Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.851416621Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=147.086µs 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.851966003Z level=info msg="Executing migration" id="Add folder_uid for dashboard" 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.854806642Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=2.840309ms 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.855337821Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 
ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.855730689Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=393.248µs 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.856239575Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.856696775Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=457.149µs 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.857286373Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.857740536Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=454.385µs 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.858215239Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.858665605Z level=info msg="Migration successfully 
executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=450.426µs 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.859187086Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.859667529Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=480.132µs 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.86015807Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.860632172Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=474.082µs 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.86113659Z level=info msg="Executing migration" id="create sso_setting table" 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.861586645Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=449.715µs 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 
ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.862231277Z level=info msg="Executing migration" id="copy kvstore migration status to each org" 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.862628313Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=397.335µs 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.863125769Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.863262586Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=137.247µs 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.86389836Z level=info msg="Executing migration" id="alter kv_store.value to longtext" 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.863946111Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=47.891µs 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.864421023Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" 2026-03-09T18:29:30.947 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.867059992Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=2.638499ms 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.867556657Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.870094627Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=2.537869ms 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.870612931Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.870845899Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=233.569µs 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=migrator t=2026-03-09T18:29:30.871424576Z level=info msg="migrations completed" performed=547 skipped=0 duration=768.20541ms 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: 
logger=sqlstore t=2026-03-09T18:29:30.872142445Z level=info msg="Created default organization" 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=secrets t=2026-03-09T18:29:30.872788219Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=plugin.store t=2026-03-09T18:29:30.881118887Z level=info msg="Loading plugins..." 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=local.finder t=2026-03-09T18:29:30.923438097Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=plugin.store t=2026-03-09T18:29:30.923534589Z level=info msg="Plugins loaded" count=55 duration=42.416013ms 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=query_data t=2026-03-09T18:29:30.925087316Z level=info msg="Query Service initialization" 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=live.push_http t=2026-03-09T18:29:30.927145195Z level=info msg="Live Push Gateway initialization" 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=ngalert.migration t=2026-03-09T18:29:30.929592223Z level=info msg=Starting 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 
vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=ngalert.migration t=2026-03-09T18:29:30.929861229Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=ngalert.migration orgID=1 t=2026-03-09T18:29:30.930100479Z level=info msg="Migrating alerts for organisation" 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=ngalert.migration orgID=1 t=2026-03-09T18:29:30.930454354Z level=info msg="Alerts found to migrate" alerts=0 2026-03-09T18:29:30.947 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=ngalert.migration t=2026-03-09T18:29:30.931381156Z level=info msg="Completed alerting migration" 2026-03-09T18:29:30.967 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:30 vm04 bash[85761]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0... 
2026-03-09T18:29:31.359 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=ngalert.state.manager t=2026-03-09T18:29:30.938663052Z level=info msg="Running in alternative execution of Error/NoData mode" 2026-03-09T18:29:31.359 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=infra.usagestats.collector t=2026-03-09T18:29:30.939761267Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 2026-03-09T18:29:31.359 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=provisioning.datasources t=2026-03-09T18:29:30.940963265Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596 2026-03-09T18:29:31.359 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=provisioning.datasources t=2026-03-09T18:29:30.946042201Z level=info msg="inserting datasource from configuration" name=Loki uid=P8E80F9AEF21F6940 2026-03-09T18:29:31.359 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=provisioning.alerting t=2026-03-09T18:29:30.95157982Z level=info msg="starting to provision alerting" 2026-03-09T18:29:31.359 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=provisioning.alerting t=2026-03-09T18:29:30.951652346Z level=info msg="finished to provision alerting" 2026-03-09T18:29:31.359 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=grafanaStorageLogger t=2026-03-09T18:29:30.952036568Z level=info msg="Storage starting" 2026-03-09T18:29:31.359 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=http.server t=2026-03-09T18:29:30.953044693Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA 2026-03-09T18:29:31.359 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=http.server t=2026-03-09T18:29:30.953398718Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=https subUrl= socket= 2026-03-09T18:29:31.359 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=ngalert.state.manager t=2026-03-09T18:29:30.953494318Z level=info msg="Warming state cache for startup" 2026-03-09T18:29:31.359 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=ngalert.state.manager t=2026-03-09T18:29:30.953714462Z level=info msg="State cache has been initialized" states=0 duration=219.782µs 2026-03-09T18:29:31.359 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=provisioning.dashboard t=2026-03-09T18:29:30.954526828Z level=info msg="starting to provision dashboards" 2026-03-09T18:29:31.359 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=sqlstore.transactions t=2026-03-09T18:29:30.967983581Z level=info msg="Database locked, sleeping then 
retrying" error="database is locked" retry=0 code="database is locked" 2026-03-09T18:29:31.359 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=ngalert.multiorg.alertmanager t=2026-03-09T18:29:30.969539135Z level=info msg="Starting MultiOrg Alertmanager" 2026-03-09T18:29:31.359 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=ngalert.scheduler t=2026-03-09T18:29:30.969628883Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1 2026-03-09T18:29:31.359 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=ticker t=2026-03-09T18:29:30.969687333Z level=info msg=starting first_tick=2026-03-09T18:29:40Z 2026-03-09T18:29:31.359 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:31 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=plugins.update.checker t=2026-03-09T18:29:31.027647401Z level=info msg="Update check succeeded" duration=59.315426ms 2026-03-09T18:29:31.359 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:31 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=provisioning.dashboard t=2026-03-09T18:29:31.09952693Z level=info msg="finished to provision dashboards" 2026-03-09T18:29:31.359 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:31 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=grafana-apiserver t=2026-03-09T18:29:31.185178093Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 2026-03-09T18:29:31.359 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:29:31 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=grafana-apiserver t=2026-03-09T18:29:31.185876295Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" 
2026-03-09T18:29:31.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:31 vm09 ceph-mon[54744]: pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T18:29:31.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:31 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:31.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:31 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:31.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:31 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:31.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:31 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:31.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:31 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:31.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:31 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:29:31.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:31 vm04 ceph-mon[51427]: pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T18:29:31.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:31 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:31.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:31 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:31.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:31 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:31.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:31 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:31.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:31 vm04 ceph-mon[51427]: from='mgr.14637 ' 
entity='mgr.y' 2026-03-09T18:29:31.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:31 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:29:31.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:31 vm04 ceph-mon[57581]: pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T18:29:31.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:31 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:31.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:31 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:31.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:31 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:31.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:31 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:31.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:31 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:31.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:31 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:29:32.384 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:32 vm04 ceph-mon[57581]: Deploying daemon node-exporter.a on vm04 2026-03-09T18:29:32.384 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 bash[85761]: Getting image source signatures 2026-03-09T18:29:32.384 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 bash[85761]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24 2026-03-09T18:29:32.384 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 bash[85761]: Copying blob 
sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510 2026-03-09T18:29:32.384 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 bash[85761]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a 2026-03-09T18:29:32.384 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-mon[51427]: Deploying daemon node-exporter.a on vm04 2026-03-09T18:29:32.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:32 vm09 ceph-mon[54744]: Deploying daemon node-exporter.a on vm04 2026-03-09T18:29:32.717 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-alertmanager-a[85551]: ts=2026-03-09T18:29:32.384Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.003343614s 2026-03-09T18:29:33.216 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:33 vm04 ceph-mon[51427]: pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T18:29:33.216 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:33 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:33.216 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:33 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:33.216 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:33 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:33.216 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:33 vm04 ceph-mon[51427]: Deploying daemon node-exporter.b on vm09 2026-03-09T18:29:33.218 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 bash[85761]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e 2026-03-09T18:29:33.218 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 bash[85761]: Writing manifest to image destination 2026-03-09T18:29:33.218 
INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 podman[85761]: 2026-03-09 18:29:32.914782857 +0000 UTC m=+2.093086519 container create 6b9a569049164cd610b9e39cb53dcd7c5e728202dca1d3d72406c9204f514761 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a, maintainer=The Prometheus Authors ) 2026-03-09T18:29:33.218 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 podman[85761]: 2026-03-09 18:29:32.94500749 +0000 UTC m=+2.123311152 container init 6b9a569049164cd610b9e39cb53dcd7c5e728202dca1d3d72406c9204f514761 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a, maintainer=The Prometheus Authors ) 2026-03-09T18:29:33.218 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 podman[85761]: 2026-03-09 18:29:32.908674755 +0000 UTC m=+2.086978417 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0 2026-03-09T18:29:33.218 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 podman[85761]: 2026-03-09 18:29:32.947938967 +0000 UTC m=+2.126242629 container start 6b9a569049164cd610b9e39cb53dcd7c5e728202dca1d3d72406c9204f514761 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a, maintainer=The Prometheus Authors ) 2026-03-09T18:29:33.218 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 bash[85761]: 6b9a569049164cd610b9e39cb53dcd7c5e728202dca1d3d72406c9204f514761 2026-03-09T18:29:33.218 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.951Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)" 2026-03-09T18:29:33.218 
INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.951Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)" 2026-03-09T18:29:33.218 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.952Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/) 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.952Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$ 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data 2026-03-09T18:29:33.219 
INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:110 level=info msg="Enabled collectors" 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=arp 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=bcache 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=bonding 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=btrfs 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=conntrack 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=cpu 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=cpufreq 2026-03-09T18:29:33.219 
INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=diskstats 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=dmi 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=edac 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=entropy 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=fibrechannel 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=filefd 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=filesystem 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=hwmon 2026-03-09T18:29:33.219 
INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=infiniband 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=ipvs 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=loadavg 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=mdadm 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=meminfo 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=netclass 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=netdev 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=netstat 2026-03-09T18:29:33.219 
INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=nfs 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=nfsd 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=nvme 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=os 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=powersupplyclass 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=pressure 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=rapl 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=schedstat 2026-03-09T18:29:33.219 
INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=selinux 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=sockstat 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=softnet 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=stat 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=tapestats 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=textfile 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=thermal_zone 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=time 2026-03-09T18:29:33.219 
INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=udp_queues 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=uname 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=vmstat 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=xfs 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=node_exporter.go:117 level=info collector=zfs 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a[85817]: ts=2026-03-09T18:29:32.953Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100 2026-03-09T18:29:33.219 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:29:32 vm04 systemd[1]: Started Ceph node-exporter.a for 5769e1c8-1be5-11f1-a591-591820987f3e. 
2026-03-09T18:29:33.464 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:33 vm09 ceph-mon[54744]: pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T18:29:33.464 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:33 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:33.464 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:33 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:33.464 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:33 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:33.464 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:33 vm09 ceph-mon[54744]: Deploying daemon node-exporter.b on vm09 2026-03-09T18:29:33.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:33 vm04 ceph-mon[57581]: pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T18:29:33.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:33 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:33.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:33 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:33.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:33 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:33.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:33 vm04 ceph-mon[57581]: Deploying daemon node-exporter.b on vm09 2026-03-09T18:29:33.764 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:33 vm09 systemd[1]: Starting Ceph node-exporter.b for 5769e1c8-1be5-11f1-a591-591820987f3e... 2026-03-09T18:29:34.109 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:33 vm09 bash[80425]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0... 
2026-03-09T18:29:34.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:34 vm09 ceph-mon[54744]: pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T18:29:34.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:34 vm04 ceph-mon[57581]: pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T18:29:34.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:34 vm04 ceph-mon[51427]: pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T18:29:35.358 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 bash[80425]: Getting image source signatures 2026-03-09T18:29:35.358 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 bash[80425]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24 2026-03-09T18:29:35.358 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 bash[80425]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510 2026-03-09T18:29:35.358 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 bash[80425]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a 2026-03-09T18:29:36.111 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 bash[80425]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e 2026-03-09T18:29:36.111 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 bash[80425]: Writing manifest to image destination 2026-03-09T18:29:36.111 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 podman[80425]: 2026-03-09 18:29:35.821241778 +0000 UTC m=+2.073719714 container create 78ab4c579a47eb616e17330b93a026d5b4fa438d9acb3fbcb7ca83cb7f77531e 
(image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b, maintainer=The Prometheus Authors ) 2026-03-09T18:29:36.111 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 podman[80425]: 2026-03-09 18:29:35.847931171 +0000 UTC m=+2.100409098 container init 78ab4c579a47eb616e17330b93a026d5b4fa438d9acb3fbcb7ca83cb7f77531e (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b, maintainer=The Prometheus Authors ) 2026-03-09T18:29:36.111 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 podman[80425]: 2026-03-09 18:29:35.850419969 +0000 UTC m=+2.102897916 container start 78ab4c579a47eb616e17330b93a026d5b4fa438d9acb3fbcb7ca83cb7f77531e (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b, maintainer=The Prometheus Authors ) 2026-03-09T18:29:36.111 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 bash[80425]: 78ab4c579a47eb616e17330b93a026d5b4fa438d9acb3fbcb7ca83cb7f77531e 2026-03-09T18:29:36.111 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 podman[80425]: 2026-03-09 18:29:35.813659296 +0000 UTC m=+2.066137243 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0 2026-03-09T18:29:36.111 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.855Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)" 2026-03-09T18:29:36.111 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.855Z caller=node_exporter.go:193 level=info msg="Build context" 
build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)" 2026-03-09T18:29:36.111 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.857Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$ 2026-03-09T18:29:36.111 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.857Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data 2026-03-09T18:29:36.111 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.857Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/) 2026-03-09T18:29:36.111 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 systemd[1]: Started Ceph node-exporter.b for 5769e1c8-1be5-11f1-a591-591820987f3e. 
2026-03-09T18:29:36.111 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.857Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ 2026-03-09T18:29:36.111 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:110 level=info msg="Enabled collectors" 2026-03-09T18:29:36.111 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=arp 2026-03-09T18:29:36.111 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=bcache 2026-03-09T18:29:36.111 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=bonding 2026-03-09T18:29:36.111 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=btrfs 2026-03-09T18:29:36.111 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=conntrack 2026-03-09T18:29:36.111 
INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=cpu 2026-03-09T18:29:36.111 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=cpufreq 2026-03-09T18:29:36.111 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=diskstats 2026-03-09T18:29:36.111 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=dmi 2026-03-09T18:29:36.111 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=edac 2026-03-09T18:29:36.112 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=entropy 2026-03-09T18:29:36.112 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=fibrechannel 2026-03-09T18:29:36.112 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=filefd 2026-03-09T18:29:36.112 
INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=filesystem 2026-03-09T18:29:36.112 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=hwmon 2026-03-09T18:29:36.112 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=infiniband 2026-03-09T18:29:36.112 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=ipvs 2026-03-09T18:29:36.112 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=loadavg 2026-03-09T18:29:36.112 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=mdadm 2026-03-09T18:29:36.112 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=meminfo 2026-03-09T18:29:36.112 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=netclass 2026-03-09T18:29:36.112 
INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=netdev 2026-03-09T18:29:36.112 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=netstat 2026-03-09T18:29:36.112 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=nfs 2026-03-09T18:29:36.112 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=nfsd 2026-03-09T18:29:36.112 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=nvme 2026-03-09T18:29:36.112 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=os 2026-03-09T18:29:36.112 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=powersupplyclass 2026-03-09T18:29:36.112 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=pressure 2026-03-09T18:29:36.112 
INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=rapl 2026-03-09T18:29:36.112 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=schedstat 2026-03-09T18:29:36.112 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=selinux 2026-03-09T18:29:36.112 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=sockstat 2026-03-09T18:29:36.112 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=softnet 2026-03-09T18:29:36.112 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=stat 2026-03-09T18:29:36.112 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=tapestats 2026-03-09T18:29:36.112 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=textfile 2026-03-09T18:29:36.112 
INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=thermal_zone 2026-03-09T18:29:36.112 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=time 2026-03-09T18:29:36.112 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=udp_queues 2026-03-09T18:29:36.112 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=uname 2026-03-09T18:29:36.112 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=vmstat 2026-03-09T18:29:36.112 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=xfs 2026-03-09T18:29:36.112 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.861Z caller=node_exporter.go:117 level=info collector=zfs 2026-03-09T18:29:36.112 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.862Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100 2026-03-09T18:29:36.112 
INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:29:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b[80479]: ts=2026-03-09T18:29:35.862Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100 2026-03-09T18:29:36.609 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:29:36 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug there is no tcmu-runner data available 2026-03-09T18:29:37.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:36 vm04 ceph-mon[51427]: pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:29:37.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:36 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:37.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:36 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:37.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:36 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:37.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:36 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:37.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:36 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:37.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:36 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:29:37.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:36 vm04 ceph-mon[57581]: pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:29:37.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:36 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:37.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:36 vm04 
ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:37.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:36 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:37.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:36 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:37.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:36 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:37.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:36 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:29:37.334 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:36 vm09 ceph-mon[54744]: pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:29:37.335 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:36 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:37.335 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:36 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:37.335 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:36 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:37.335 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:36 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:37.335 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:36 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:37.335 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:36 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:29:38.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:37 vm04 ceph-mon[57581]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 
2026-03-09T18:29:38.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:37 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:38.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:37 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:38.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:37 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:38.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:37 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:38.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:37 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:29:38.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:37 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:29:38.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:37 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:38.217 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:38 vm04 systemd[1]: Stopping Ceph alertmanager.a for 5769e1c8-1be5-11f1-a591-591820987f3e... 
2026-03-09T18:29:38.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:37 vm04 ceph-mon[51427]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:29:38.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:37 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:38.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:37 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:38.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:37 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:38.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:37 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:38.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:37 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:29:38.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:37 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:29:38.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:37 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:38.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:37 vm09 ceph-mon[54744]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:29:38.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:37 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:38.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:37 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:38.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:37 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:38.359 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:37 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:38.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:37 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:29:38.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:37 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:29:38.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:37 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:38.580 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:38 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-alertmanager-a[85551]: ts=2026-03-09T18:29:38.292Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..." 2026-03-09T18:29:38.580 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:38 vm04 podman[86310]: 2026-03-09 18:29:38.308488301 +0000 UTC m=+0.036172006 container died 40e958a1dff89c70ceca3f0017705f42c3dae0e74c7126c889a6916d936b5957 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-09T18:29:38.580 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:38 vm04 podman[86310]: 2026-03-09 18:29:38.338649385 +0000 UTC m=+0.066333100 container remove 40e958a1dff89c70ceca3f0017705f42c3dae0e74c7126c889a6916d936b5957 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-09T18:29:38.580 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:38 vm04 podman[86310]: 2026-03-09 18:29:38.339770132 +0000 UTC m=+0.067453847 volume remove 73afc499601b2a82e488f4e550b59aaacbd11b6ecce37fa5fbb1acda3900219e 
2026-03-09T18:29:38.580 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:38 vm04 bash[86310]: ceph-5769e1c8-1be5-11f1-a591-591820987f3e-alertmanager-a 2026-03-09T18:29:38.580 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:38 vm04 systemd[1]: ceph-5769e1c8-1be5-11f1-a591-591820987f3e@alertmanager.a.service: Deactivated successfully. 2026-03-09T18:29:38.580 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:38 vm04 systemd[1]: Stopped Ceph alertmanager.a for 5769e1c8-1be5-11f1-a591-591820987f3e. 2026-03-09T18:29:38.580 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:38 vm04 systemd[1]: Starting Ceph alertmanager.a for 5769e1c8-1be5-11f1-a591-591820987f3e... 2026-03-09T18:29:38.905 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:38 vm04 podman[86380]: 2026-03-09 18:29:38.580497351 +0000 UTC m=+0.051816508 volume create 236079768cac12b1a32a4a820dea2e48e9736454d8f6efd0085eaaf31cd2c9b7 2026-03-09T18:29:38.905 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:38 vm04 podman[86380]: 2026-03-09 18:29:38.611089532 +0000 UTC m=+0.082408679 container create 23f69edc71acce51a4567f406e0a8a6fa91eb66865b8d3602450dbdb2ff041e3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-09T18:29:38.905 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:38 vm04 podman[86380]: 2026-03-09 18:29:38.543866708 +0000 UTC m=+0.015185876 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0 2026-03-09T18:29:38.905 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:38 vm04 podman[86380]: 2026-03-09 18:29:38.650821129 +0000 UTC m=+0.122140296 container init 23f69edc71acce51a4567f406e0a8a6fa91eb66865b8d3602450dbdb2ff041e3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-alertmanager-a, maintainer=The 
Prometheus Authors ) 2026-03-09T18:29:38.905 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:38 vm04 podman[86380]: 2026-03-09 18:29:38.655326803 +0000 UTC m=+0.126645950 container start 23f69edc71acce51a4567f406e0a8a6fa91eb66865b8d3602450dbdb2ff041e3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-09T18:29:38.905 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:38 vm04 bash[86380]: 23f69edc71acce51a4567f406e0a8a6fa91eb66865b8d3602450dbdb2ff041e3 2026-03-09T18:29:38.905 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:38 vm04 systemd[1]: Started Ceph alertmanager.a for 5769e1c8-1be5-11f1-a591-591820987f3e. 2026-03-09T18:29:38.905 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:38 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-alertmanager-a[86390]: ts=2026-03-09T18:29:38.680Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)" 2026-03-09T18:29:38.905 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:38 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-alertmanager-a[86390]: ts=2026-03-09T18:29:38.680Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)" 2026-03-09T18:29:38.905 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:38 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-alertmanager-a[86390]: ts=2026-03-09T18:29:38.681Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.123.104 port=9094 2026-03-09T18:29:38.905 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:38 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-alertmanager-a[86390]: ts=2026-03-09T18:29:38.682Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." 
interval=2s 2026-03-09T18:29:38.905 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:38 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-alertmanager-a[86390]: ts=2026-03-09T18:29:38.727Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-09T18:29:38.905 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:38 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-alertmanager-a[86390]: ts=2026-03-09T18:29:38.728Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-09T18:29:38.905 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:38 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-alertmanager-a[86390]: ts=2026-03-09T18:29:38.729Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9093 2026-03-09T18:29:38.905 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:38 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-alertmanager-a[86390]: ts=2026-03-09T18:29:38.729Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9093 2026-03-09T18:29:39.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:39 vm04 ceph-mon[51427]: pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:29:39.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:39 vm04 ceph-mon[51427]: Reconfiguring alertmanager.a (dependencies changed)... 
2026-03-09T18:29:39.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:39 vm04 ceph-mon[51427]: Reconfiguring daemon alertmanager.a on vm04 2026-03-09T18:29:39.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:39 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:39.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:39 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:39.217 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:38 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ::ffff:192.168.123.109 - - [09/Mar/2026:18:29:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:29:39.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:39 vm04 ceph-mon[57581]: pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:29:39.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:39 vm04 ceph-mon[57581]: Reconfiguring alertmanager.a (dependencies changed)... 2026-03-09T18:29:39.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:39 vm04 ceph-mon[57581]: Reconfiguring daemon alertmanager.a on vm04 2026-03-09T18:29:39.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:39 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:39.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:39 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:39.217 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:39 vm09 ceph-mon[54744]: pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:29:39.217 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:39 vm09 ceph-mon[54744]: Reconfiguring alertmanager.a (dependencies changed)... 
2026-03-09T18:29:39.217 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:39 vm09 ceph-mon[54744]: Reconfiguring daemon alertmanager.a on vm04 2026-03-09T18:29:39.217 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:39 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:39.217 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:39 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:39.530 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 systemd[1]: Stopping Ceph prometheus.a for 5769e1c8-1be5-11f1-a591-591820987f3e... 2026-03-09T18:29:39.530 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[78698]: ts=2026-03-09T18:29:39.429Z caller=main.go:964 level=warn msg="Received SIGTERM, exiting gracefully..." 2026-03-09T18:29:39.530 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[78698]: ts=2026-03-09T18:29:39.430Z caller=main.go:988 level=info msg="Stopping scrape discovery manager..." 2026-03-09T18:29:39.531 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[78698]: ts=2026-03-09T18:29:39.430Z caller=main.go:1002 level=info msg="Stopping notify discovery manager..." 2026-03-09T18:29:39.531 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[78698]: ts=2026-03-09T18:29:39.430Z caller=manager.go:177 level=info component="rule manager" msg="Stopping rule manager..." 
2026-03-09T18:29:39.531 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[78698]: ts=2026-03-09T18:29:39.430Z caller=manager.go:187 level=info component="rule manager" msg="Rule manager stopped" 2026-03-09T18:29:39.531 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[78698]: ts=2026-03-09T18:29:39.430Z caller=main.go:1039 level=info msg="Stopping scrape manager..." 2026-03-09T18:29:39.531 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[78698]: ts=2026-03-09T18:29:39.430Z caller=main.go:984 level=info msg="Scrape discovery manager stopped" 2026-03-09T18:29:39.531 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[78698]: ts=2026-03-09T18:29:39.430Z caller=main.go:998 level=info msg="Notify discovery manager stopped" 2026-03-09T18:29:39.531 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[78698]: ts=2026-03-09T18:29:39.430Z caller=main.go:1031 level=info msg="Scrape manager stopped" 2026-03-09T18:29:39.531 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[78698]: ts=2026-03-09T18:29:39.431Z caller=notifier.go:618 level=info component=notifier msg="Stopping notification manager..." 
2026-03-09T18:29:39.531 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[78698]: ts=2026-03-09T18:29:39.431Z caller=main.go:1261 level=info msg="Notifier manager stopped" 2026-03-09T18:29:39.531 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[78698]: ts=2026-03-09T18:29:39.431Z caller=main.go:1273 level=info msg="See you next time!" 2026-03-09T18:29:39.531 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 podman[81038]: 2026-03-09 18:29:39.441708806 +0000 UTC m=+0.029262190 container died 6e27083d9b43d3e8083f92800ed516ea53b1c16594f15013f02665da66f371c3 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a, maintainer=The Prometheus Authors ) 2026-03-09T18:29:39.531 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 podman[81038]: 2026-03-09 18:29:39.461893553 +0000 UTC m=+0.049446937 container remove 6e27083d9b43d3e8083f92800ed516ea53b1c16594f15013f02665da66f371c3 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a, maintainer=The Prometheus Authors ) 2026-03-09T18:29:39.531 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 bash[81038]: ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a 2026-03-09T18:29:39.859 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 systemd[1]: ceph-5769e1c8-1be5-11f1-a591-591820987f3e@prometheus.a.service: Deactivated successfully. 2026-03-09T18:29:39.859 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 systemd[1]: Stopped Ceph prometheus.a for 5769e1c8-1be5-11f1-a591-591820987f3e. 2026-03-09T18:29:39.859 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 systemd[1]: Starting Ceph prometheus.a for 5769e1c8-1be5-11f1-a591-591820987f3e... 
2026-03-09T18:29:39.859 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 podman[81106]: 2026-03-09 18:29:39.658750361 +0000 UTC m=+0.023907426 container create 2eb808e4f6f1041a7990af8ced13b19ae26067abd58b5cc1562fd6b4e06b6118 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a, maintainer=The Prometheus Authors ) 2026-03-09T18:29:39.859 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 podman[81106]: 2026-03-09 18:29:39.688239046 +0000 UTC m=+0.053396111 container init 2eb808e4f6f1041a7990af8ced13b19ae26067abd58b5cc1562fd6b4e06b6118 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a, maintainer=The Prometheus Authors ) 2026-03-09T18:29:39.859 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 podman[81106]: 2026-03-09 18:29:39.690992681 +0000 UTC m=+0.056149746 container start 2eb808e4f6f1041a7990af8ced13b19ae26067abd58b5cc1562fd6b4e06b6118 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a, maintainer=The Prometheus Authors ) 2026-03-09T18:29:39.859 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 bash[81106]: 2eb808e4f6f1041a7990af8ced13b19ae26067abd58b5cc1562fd6b4e06b6118 2026-03-09T18:29:39.859 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 podman[81106]: 2026-03-09 18:29:39.64890168 +0000 UTC m=+0.014058756 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0 2026-03-09T18:29:39.859 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 systemd[1]: Started Ceph prometheus.a for 5769e1c8-1be5-11f1-a591-591820987f3e. 
2026-03-09T18:29:39.859 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[81116]: ts=2026-03-09T18:29:39.722Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)" 2026-03-09T18:29:39.859 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[81116]: ts=2026-03-09T18:29:39.722Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)" 2026-03-09T18:29:39.859 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[81116]: ts=2026-03-09T18:29:39.722Z caller=main.go:623 level=info host_details="(Linux 5.14.0-686.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Feb 19 10:49:27 UTC 2026 x86_64 vm09 (none))" 2026-03-09T18:29:39.859 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[81116]: ts=2026-03-09T18:29:39.722Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)" 2026-03-09T18:29:39.859 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[81116]: ts=2026-03-09T18:29:39.722Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)" 2026-03-09T18:29:39.859 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[81116]: ts=2026-03-09T18:29:39.724Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=:9095 2026-03-09T18:29:39.859 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[81116]: ts=2026-03-09T18:29:39.725Z 
caller=main.go:1129 level=info msg="Starting TSDB ..." 2026-03-09T18:29:39.859 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[81116]: ts=2026-03-09T18:29:39.728Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9095 2026-03-09T18:29:39.859 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[81116]: ts=2026-03-09T18:29:39.728Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9095 2026-03-09T18:29:39.859 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[81116]: ts=2026-03-09T18:29:39.729Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 2026-03-09T18:29:39.859 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[81116]: ts=2026-03-09T18:29:39.730Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=23.625µs 2026-03-09T18:29:39.859 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[81116]: ts=2026-03-09T18:29:39.730Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while" 2026-03-09T18:29:39.859 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[81116]: ts=2026-03-09T18:29:39.730Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=1 2026-03-09T18:29:39.859 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[81116]: ts=2026-03-09T18:29:39.730Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=1 
maxSegment=1 2026-03-09T18:29:39.860 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[81116]: ts=2026-03-09T18:29:39.731Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=37.231µs wal_replay_duration=671.362µs wbl_replay_duration=140ns total_replay_duration=1.052149ms 2026-03-09T18:29:39.860 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[81116]: ts=2026-03-09T18:29:39.734Z caller=main.go:1150 level=info fs_type=XFS_SUPER_MAGIC 2026-03-09T18:29:39.860 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[81116]: ts=2026-03-09T18:29:39.734Z caller=main.go:1153 level=info msg="TSDB started" 2026-03-09T18:29:39.860 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[81116]: ts=2026-03-09T18:29:39.734Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 2026-03-09T18:29:39.860 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[81116]: ts=2026-03-09T18:29:39.751Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=16.145625ms db_storage=1.272µs remote_storage=1.313µs web_handler=310ns query_engine=721ns scrape=697.591µs scrape_sd=86.523µs notify=8.186µs notify_sd=6.883µs rules=14.966119ms tracing=8.024µs 2026-03-09T18:29:39.860 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[81116]: ts=2026-03-09T18:29:39.751Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." 
2026-03-09T18:29:39.860 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:29:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[81116]: ts=2026-03-09T18:29:39.751Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." 2026-03-09T18:29:40.026 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:40 vm04 ceph-mon[51427]: Reconfiguring prometheus.a (dependencies changed)... 2026-03-09T18:29:40.026 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:40 vm04 ceph-mon[51427]: Reconfiguring daemon prometheus.a on vm09 2026-03-09T18:29:40.026 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:40 vm04 ceph-mon[51427]: pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:29:40.026 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:40 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:40.026 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:40 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:40.026 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:40 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T18:29:40.027 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:40 vm04 ceph-mon[51427]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T18:29:40.027 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:40 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm04.local:9093"}]: dispatch 2026-03-09T18:29:40.027 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:40 vm04 ceph-mon[51427]: from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm04.local:9093"}]: dispatch 2026-03-09T18:29:40.027 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:40 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:40.027 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:40 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T18:29:40.027 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:40 vm04 ceph-mon[51427]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T18:29:40.027 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:40 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm09.local:9095"}]: dispatch 2026-03-09T18:29:40.027 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:40 vm04 ceph-mon[51427]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm09.local:9095"}]: dispatch 2026-03-09T18:29:40.027 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:40 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:40.027 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:40 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T18:29:40.027 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:40 vm04 ceph-mon[51427]: from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T18:29:40.027 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:40 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm09.local:3000"}]: dispatch 2026-03-09T18:29:40.027 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:40 vm04 ceph-mon[51427]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm09.local:3000"}]: dispatch 2026-03-09T18:29:40.027 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:40 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:40.027 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:40 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:29:40.027 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:39 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: [09/Mar/2026:18:29:39] ENGINE Bus STOPPING 2026-03-09T18:29:40.027 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:39 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: [09/Mar/2026:18:29:39] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-09T18:29:40.027 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:39 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: [09/Mar/2026:18:29:39] ENGINE Bus STOPPED 2026-03-09T18:29:40.027 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:39 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: [09/Mar/2026:18:29:39] ENGINE Bus STARTING 2026-03-09T18:29:40.328 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:40 vm04 ceph-mon[57581]: Reconfiguring prometheus.a (dependencies changed)... 
2026-03-09T18:29:40.328 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:40 vm04 ceph-mon[57581]: Reconfiguring daemon prometheus.a on vm09 2026-03-09T18:29:40.328 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:40 vm04 ceph-mon[57581]: pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:29:40.328 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:40 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:40.328 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:40 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:40.328 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:40 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T18:29:40.328 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:40 vm04 ceph-mon[57581]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T18:29:40.328 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:40 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm04.local:9093"}]: dispatch 2026-03-09T18:29:40.328 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:40 vm04 ceph-mon[57581]: from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm04.local:9093"}]: dispatch 2026-03-09T18:29:40.328 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:40 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:40.328 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:40 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T18:29:40.328 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:40 vm04 ceph-mon[57581]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T18:29:40.329 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:40 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm09.local:9095"}]: dispatch 2026-03-09T18:29:40.329 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:40 vm04 ceph-mon[57581]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm09.local:9095"}]: dispatch 2026-03-09T18:29:40.329 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:40 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:40.329 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:40 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T18:29:40.329 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:40 vm04 ceph-mon[57581]: from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T18:29:40.329 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:40 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm09.local:3000"}]: dispatch 2026-03-09T18:29:40.329 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:40 vm04 ceph-mon[57581]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm09.local:3000"}]: dispatch 2026-03-09T18:29:40.329 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:40 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:40.329 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:40 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:29:40.329 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:40 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: [09/Mar/2026:18:29:40] ENGINE Serving on http://:::9283 2026-03-09T18:29:40.329 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:40 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: [09/Mar/2026:18:29:40] ENGINE Bus STARTED 2026-03-09T18:29:40.329 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:40 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: [09/Mar/2026:18:29:40] ENGINE Bus STOPPING 2026-03-09T18:29:40.329 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:40 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: [09/Mar/2026:18:29:40] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-09T18:29:40.329 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:40 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: [09/Mar/2026:18:29:40] ENGINE Bus STOPPED 2026-03-09T18:29:40.329 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:40 vm04 
ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: [09/Mar/2026:18:29:40] ENGINE Bus STARTING 2026-03-09T18:29:40.329 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:40 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: [09/Mar/2026:18:29:40] ENGINE Serving on http://:::9283 2026-03-09T18:29:40.329 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:40 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: [09/Mar/2026:18:29:40] ENGINE Bus STARTED 2026-03-09T18:29:40.329 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:40 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: [09/Mar/2026:18:29:40] ENGINE Bus STOPPING 2026-03-09T18:29:40.329 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:40 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: [09/Mar/2026:18:29:40] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-09T18:29:40.329 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:40 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: [09/Mar/2026:18:29:40] ENGINE Bus STOPPED 2026-03-09T18:29:40.329 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:40 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: [09/Mar/2026:18:29:40] ENGINE Bus STARTING 2026-03-09T18:29:40.329 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:40 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: [09/Mar/2026:18:29:40] ENGINE Serving on http://:::9283 2026-03-09T18:29:40.329 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:40 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: [09/Mar/2026:18:29:40] ENGINE Bus STARTED 2026-03-09T18:29:40.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:40 vm09 ceph-mon[54744]: Reconfiguring prometheus.a (dependencies changed)... 
2026-03-09T18:29:40.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:40 vm09 ceph-mon[54744]: Reconfiguring daemon prometheus.a on vm09 2026-03-09T18:29:40.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:40 vm09 ceph-mon[54744]: pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:29:40.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:40 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:40.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:40 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:40.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:40 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T18:29:40.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:40 vm09 ceph-mon[54744]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T18:29:40.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:40 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm04.local:9093"}]: dispatch 2026-03-09T18:29:40.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:40 vm09 ceph-mon[54744]: from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm04.local:9093"}]: dispatch 2026-03-09T18:29:40.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:40 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:40.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:40 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T18:29:40.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:40 vm09 ceph-mon[54744]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T18:29:40.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:40 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm09.local:9095"}]: dispatch 2026-03-09T18:29:40.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:40 vm09 ceph-mon[54744]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm09.local:9095"}]: dispatch 2026-03-09T18:29:40.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:40 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:40.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:40 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T18:29:40.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:40 vm09 ceph-mon[54744]: from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T18:29:40.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:40 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm09.local:3000"}]: dispatch 2026-03-09T18:29:40.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:40 vm09 ceph-mon[54744]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm09.local:3000"}]: dispatch 2026-03-09T18:29:40.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:40 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:40.359 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:40 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:29:40.683 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:40 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-alertmanager-a[86390]: ts=2026-03-09T18:29:40.683Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000597711s 2026-03-09T18:29:41.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:41 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:41.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:41 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:41.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:41 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:41.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:41 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:41.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:41 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:41.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:41 vm09 ceph-mon[54744]: from='mgr.14637 
192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:29:41.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:41 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:29:41.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:41 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:41.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:41 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:41.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:41 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:41.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:41 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:41.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:41 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:41.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:41 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:41.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:41 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:29:41.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:41 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:29:41.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:41 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:41.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:41 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:41.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:41 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 
2026-03-09T18:29:41.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:41 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:41.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:41 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:41.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:41 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:41.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:41 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:29:41.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:41 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:29:41.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:41 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:29:42.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:42 vm09 ceph-mon[54744]: pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:29:42.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:42 vm04 ceph-mon[51427]: pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:29:42.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:42 vm04 ceph-mon[57581]: pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:29:44.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:44 vm09 ceph-mon[54744]: pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:29:44.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:44 vm04 ceph-mon[51427]: pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 
160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:29:44.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:44 vm04 ceph-mon[57581]: pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:29:46.858 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:29:46 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug there is no tcmu-runner data available 2026-03-09T18:29:46.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:46 vm09 ceph-mon[54744]: pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:29:46.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:46 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:29:46.966 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:46 vm04 ceph-mon[51427]: pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:29:46.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:46 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:29:46.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:46 vm04 ceph-mon[57581]: pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:29:46.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:46 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:29:48.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:47 vm09 ceph-mon[54744]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 
2026-03-09T18:29:48.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:47 vm04 ceph-mon[51427]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:29:48.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:47 vm04 ceph-mon[57581]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:29:48.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:48 vm04 ceph-mon[57581]: pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:29:48.967 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:48 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ::ffff:192.168.123.109 - - [09/Mar/2026:18:29:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:29:48.967 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:29:48 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-alertmanager-a[86390]: ts=2026-03-09T18:29:48.686Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.003978342s 2026-03-09T18:29:48.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:48 vm04 ceph-mon[51427]: pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:29:49.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:48 vm09 ceph-mon[54744]: pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:29:51.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:50 vm09 ceph-mon[54744]: pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:29:51.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:50 vm04 ceph-mon[57581]: pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 
160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:29:51.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:50 vm04 ceph-mon[51427]: pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:29:52.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:52 vm04 ceph-mon[51427]: pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:29:52.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:52 vm04 ceph-mon[57581]: pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:29:52.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:52 vm09 ceph-mon[54744]: pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:29:54.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:54 vm09 ceph-mon[54744]: pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:29:54.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:54 vm04 ceph-mon[57581]: pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:29:54.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:54 vm04 ceph-mon[51427]: pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:29:56.858 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:29:56 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug there is no tcmu-runner data available 2026-03-09T18:29:56.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:56 vm09 ceph-mon[54744]: pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:29:56.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 
18:29:56 vm04 ceph-mon[57581]: pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:29:56.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:56 vm04 ceph-mon[51427]: pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:29:57.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:57 vm09 ceph-mon[54744]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:29:57.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:57 vm04 ceph-mon[51427]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:29:57.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:57 vm04 ceph-mon[57581]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:29:58.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:29:58 vm09 ceph-mon[54744]: pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:29:58.905 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:29:58 vm04 ceph-mon[51427]: pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:29:58.905 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:29:58 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ::ffff:192.168.123.109 - - [09/Mar/2026:18:29:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:29:58.905 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:29:58 vm04 ceph-mon[57581]: pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:00.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:00 vm09 ceph-mon[54744]: pgmap v25: 
132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:00.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:00 vm09 ceph-mon[54744]: overall HEALTH_OK 2026-03-09T18:30:00.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:00 vm04 ceph-mon[51427]: pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:00.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:00 vm04 ceph-mon[51427]: overall HEALTH_OK 2026-03-09T18:30:00.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:00 vm04 ceph-mon[57581]: pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:00.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:00 vm04 ceph-mon[57581]: overall HEALTH_OK 2026-03-09T18:30:01.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:01 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:30:01.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:01 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:30:01.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:01 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:30:02.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:02 vm09 ceph-mon[54744]: pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:02.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:02 vm04 ceph-mon[51427]: pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T18:30:02.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:02 vm04 ceph-mon[57581]: pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:04.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:04 vm09 ceph-mon[54744]: pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:04.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:04 vm04 ceph-mon[51427]: pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:04.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:04 vm04 ceph-mon[57581]: pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:06.858 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:30:06 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug there is no tcmu-runner data available 2026-03-09T18:30:06.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:06 vm09 ceph-mon[54744]: pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:06.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:06 vm04 ceph-mon[51427]: pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:06.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:06 vm04 ceph-mon[57581]: pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:07.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:07 vm09 ceph-mon[54744]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:30:07.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:07 vm04 
ceph-mon[51427]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:30:07.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:07 vm04 ceph-mon[57581]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:30:08.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:08 vm09 ceph-mon[54744]: pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:08.904 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:30:08 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ::ffff:192.168.123.109 - - [09/Mar/2026:18:30:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:30:08.904 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:08 vm04 ceph-mon[57581]: pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:08.904 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:08 vm04 ceph-mon[51427]: pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:10.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:10 vm09 ceph-mon[54744]: pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:10.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:10 vm04 ceph-mon[51427]: pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:10.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:10 vm04 ceph-mon[57581]: pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:12.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:12 vm09 ceph-mon[54744]: pgmap v31: 132 pgs: 132 
active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:12.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:12 vm04 ceph-mon[51427]: pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:12.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:12 vm04 ceph-mon[57581]: pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:14.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:14 vm09 ceph-mon[54744]: pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:14.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:14 vm04 ceph-mon[51427]: pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:14.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:14 vm04 ceph-mon[57581]: pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:15.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:15 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.14", "id": [1, 5]}]: dispatch 2026-03-09T18:30:15.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:15 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.14", "id": [1, 5]}]: dispatch 2026-03-09T18:30:15.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:15 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.17", "id": [1, 2]}]: dispatch 2026-03-09T18:30:15.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:15 vm09 ceph-mon[54744]: 
from='mgr.14637 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.17", "id": [1, 2]}]: dispatch 2026-03-09T18:30:15.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:15 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1f", "id": [1, 2]}]: dispatch 2026-03-09T18:30:15.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:15 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1f", "id": [1, 2]}]: dispatch 2026-03-09T18:30:15.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:15 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:30:15.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:15 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.14", "id": [1, 5]}]: dispatch 2026-03-09T18:30:15.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:15 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.14", "id": [1, 5]}]: dispatch 2026-03-09T18:30:15.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:15 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.17", "id": [1, 2]}]: dispatch 2026-03-09T18:30:15.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:15 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.17", "id": [1, 2]}]: dispatch 2026-03-09T18:30:15.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:15 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' 
cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1f", "id": [1, 2]}]: dispatch 2026-03-09T18:30:15.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:15 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1f", "id": [1, 2]}]: dispatch 2026-03-09T18:30:15.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:15 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:30:15.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:15 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.14", "id": [1, 5]}]: dispatch 2026-03-09T18:30:15.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:15 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.14", "id": [1, 5]}]: dispatch 2026-03-09T18:30:15.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:15 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.17", "id": [1, 2]}]: dispatch 2026-03-09T18:30:15.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:15 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.17", "id": [1, 2]}]: dispatch 2026-03-09T18:30:15.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:15 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1f", "id": [1, 2]}]: dispatch 2026-03-09T18:30:15.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:15 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1f", 
"id": [1, 2]}]: dispatch 2026-03-09T18:30:15.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:15 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:30:16.858 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:30:16 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug there is no tcmu-runner data available 2026-03-09T18:30:16.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:16 vm09 ceph-mon[54744]: pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:16.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:16 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.14", "id": [1, 5]}]': finished 2026-03-09T18:30:16.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:16 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.17", "id": [1, 2]}]': finished 2026-03-09T18:30:16.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:16 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1f", "id": [1, 2]}]': finished 2026-03-09T18:30:16.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:16 vm09 ceph-mon[54744]: osdmap e57: 8 total, 8 up, 8 in 2026-03-09T18:30:16.969 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:16 vm04 ceph-mon[51427]: pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:16.970 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:16 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.14", "id": [1, 5]}]': finished 2026-03-09T18:30:16.970 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:16 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.17", "id": [1, 2]}]': finished 2026-03-09T18:30:16.970 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:16 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1f", "id": [1, 2]}]': finished 2026-03-09T18:30:16.970 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:16 vm04 ceph-mon[51427]: osdmap e57: 8 total, 8 up, 8 in 2026-03-09T18:30:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:16 vm04 ceph-mon[57581]: pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:16 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.14", "id": [1, 5]}]': finished 2026-03-09T18:30:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:16 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.17", "id": [1, 2]}]': finished 2026-03-09T18:30:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:16 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1f", "id": [1, 2]}]': finished 2026-03-09T18:30:16.970 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:16 vm04 ceph-mon[57581]: osdmap e57: 8 total, 8 up, 8 in 2026-03-09T18:30:17.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:17 vm09 ceph-mon[54744]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:30:17.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:17 vm09 ceph-mon[54744]: osdmap e58: 8 total, 8 up, 8 in 2026-03-09T18:30:17.967 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:17 vm04 ceph-mon[51427]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:30:17.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:17 vm04 ceph-mon[51427]: osdmap e58: 8 total, 8 up, 8 in 2026-03-09T18:30:17.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:17 vm04 ceph-mon[57581]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:30:17.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:17 vm04 ceph-mon[57581]: osdmap e58: 8 total, 8 up, 8 in 2026-03-09T18:30:18.904 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:18 vm04 ceph-mon[51427]: pgmap v36: 132 pgs: 3 peering, 129 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:18.904 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:18 vm04 ceph-mon[51427]: Health check failed: Reduced data availability: 3 pgs peering (PG_AVAILABILITY) 2026-03-09T18:30:18.904 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:30:18 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ::ffff:192.168.123.109 - - [09/Mar/2026:18:30:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:30:18.904 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:18 vm04 ceph-mon[57581]: pgmap v36: 132 pgs: 3 peering, 129 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:18.904 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:18 vm04 ceph-mon[57581]: Health check failed: Reduced data availability: 3 pgs peering (PG_AVAILABILITY) 2026-03-09T18:30:19.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:18 vm09 ceph-mon[54744]: pgmap v36: 132 pgs: 3 peering, 129 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:19.108 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:18 vm09 ceph-mon[54744]: Health check failed: Reduced data availability: 3 pgs peering (PG_AVAILABILITY) 2026-03-09T18:30:20.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:20 vm04 ceph-mon[57581]: pgmap v37: 132 pgs: 3 peering, 129 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:20.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:20 vm04 ceph-mon[51427]: pgmap v37: 132 pgs: 3 peering, 129 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:21.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:20 vm09 ceph-mon[54744]: pgmap v37: 132 pgs: 3 peering, 129 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:23.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:22 vm09 ceph-mon[54744]: pgmap v38: 132 pgs: 3 peering, 129 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T18:30:23.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:22 vm04 ceph-mon[51427]: pgmap v38: 132 pgs: 3 peering, 129 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T18:30:23.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:22 vm04 ceph-mon[57581]: pgmap v38: 132 pgs: 3 peering, 129 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T18:30:23.687 INFO:tasks.workunit.client.0.vm04.stderr:Note: switching to '569c3e99c9b32a51b4eaf08731c728f4513ed589'. 2026-03-09T18:30:23.687 INFO:tasks.workunit.client.0.vm04.stderr: 2026-03-09T18:30:23.687 INFO:tasks.workunit.client.0.vm04.stderr:You are in 'detached HEAD' state. 
You can look around, make experimental 2026-03-09T18:30:23.687 INFO:tasks.workunit.client.0.vm04.stderr:changes and commit them, and you can discard any commits you make in this 2026-03-09T18:30:23.687 INFO:tasks.workunit.client.0.vm04.stderr:state without impacting any branches by switching back to a branch. 2026-03-09T18:30:23.687 INFO:tasks.workunit.client.0.vm04.stderr: 2026-03-09T18:30:23.687 INFO:tasks.workunit.client.0.vm04.stderr:If you want to create a new branch to retain commits you create, you may 2026-03-09T18:30:23.687 INFO:tasks.workunit.client.0.vm04.stderr:do so (now or later) by using -c with the switch command. Example: 2026-03-09T18:30:23.687 INFO:tasks.workunit.client.0.vm04.stderr: 2026-03-09T18:30:23.687 INFO:tasks.workunit.client.0.vm04.stderr: git switch -c 2026-03-09T18:30:23.687 INFO:tasks.workunit.client.0.vm04.stderr: 2026-03-09T18:30:23.687 INFO:tasks.workunit.client.0.vm04.stderr:Or undo this operation with: 2026-03-09T18:30:23.687 INFO:tasks.workunit.client.0.vm04.stderr: 2026-03-09T18:30:23.687 INFO:tasks.workunit.client.0.vm04.stderr: git switch - 2026-03-09T18:30:23.687 INFO:tasks.workunit.client.0.vm04.stderr: 2026-03-09T18:30:23.687 INFO:tasks.workunit.client.0.vm04.stderr:Turn off this advice by setting config variable advice.detachedHead to false 2026-03-09T18:30:23.687 INFO:tasks.workunit.client.0.vm04.stderr: 2026-03-09T18:30:23.687 INFO:tasks.workunit.client.0.vm04.stderr:HEAD is now at 569c3e99c9b qa/rgw: bucket notifications use pynose 2026-03-09T18:30:23.693 DEBUG:teuthology.orchestra.run.vm04:> cd -- /home/ubuntu/cephtest/clone.client.0/qa/workunits && if test -e Makefile ; then make ; fi && find -executable -type f -printf '%P\0' >/home/ubuntu/cephtest/workunits.list.client.0 2026-03-09T18:30:23.748 INFO:tasks.workunit.client.0.vm04.stdout:for d in direct_io fs ; do ( cd $d ; make all ) ; done 2026-03-09T18:30:23.749 INFO:tasks.workunit.client.0.vm04.stdout:make[1]: Entering directory 
'/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io' 2026-03-09T18:30:23.749 INFO:tasks.workunit.client.0.vm04.stdout:cc -Wall -Wextra -D_GNU_SOURCE direct_io_test.c -o direct_io_test 2026-03-09T18:30:23.796 INFO:tasks.workunit.client.0.vm04.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_sync_io.c -o test_sync_io 2026-03-09T18:30:23.831 INFO:tasks.workunit.client.0.vm04.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_short_dio_read.c -o test_short_dio_read 2026-03-09T18:30:23.862 INFO:tasks.workunit.client.0.vm04.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io' 2026-03-09T18:30:23.865 INFO:tasks.workunit.client.0.vm04.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs' 2026-03-09T18:30:23.865 INFO:tasks.workunit.client.0.vm04.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_o_trunc.c -o test_o_trunc 2026-03-09T18:30:23.895 INFO:tasks.workunit.client.0.vm04.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs' 2026-03-09T18:30:23.898 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T18:30:23.898 DEBUG:teuthology.orchestra.run.vm04:> dd if=/home/ubuntu/cephtest/workunits.list.client.0 of=/dev/stdout 2026-03-09T18:30:23.954 INFO:tasks.workunit:Running workunits matching rados/test_python.sh on client.0... 2026-03-09T18:30:23.954 INFO:tasks.workunit:Running workunit rados/test_python.sh... 
2026-03-09T18:30:23.954 DEBUG:teuthology.orchestra.run.vm04:workunit test rados/test_python.sh> mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=569c3e99c9b32a51b4eaf08731c728f4513ed589 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 1h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_python.sh
2026-03-09T18:30:24.014 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph osd pool create rbd
2026-03-09T18:30:24.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:23 vm04 ceph-mon[57581]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 3 pgs peering)
2026-03-09T18:30:24.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:23 vm04 ceph-mon[57581]: Cluster is now healthy
2026-03-09T18:30:24.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:23 vm04 ceph-mon[51427]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 3 pgs peering)
2026-03-09T18:30:24.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:23 vm04 ceph-mon[51427]: Cluster is now healthy
2026-03-09T18:30:24.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:23 vm09 ceph-mon[54744]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 3 pgs peering)
2026-03-09T18:30:24.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:23 vm09 ceph-mon[54744]: Cluster is now healthy
2026-03-09T18:30:24.977 INFO:tasks.workunit.client.0.vm04.stderr:pool 'rbd' already exists
2026-03-09T18:30:24.987 INFO:tasks.workunit.client.0.vm04.stderr:++ dirname /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_python.sh
2026-03-09T18:30:24.987 INFO:tasks.workunit.client.0.vm04.stderr:+ python3 -m pytest -v /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/../../../src/test/pybind/test_rados.py
2026-03-09T18:30:25.068 INFO:tasks.workunit.client.0.vm04.stdout:============================= test session starts ==============================
2026-03-09T18:30:25.068 INFO:tasks.workunit.client.0.vm04.stdout:platform linux -- Python 3.9.25, pytest-6.2.2, py-1.10.0, pluggy-0.13.1 -- /usr/bin/python3
2026-03-09T18:30:25.069 INFO:tasks.workunit.client.0.vm04.stdout:cachedir: .pytest_cache
2026-03-09T18:30:25.069 INFO:tasks.workunit.client.0.vm04.stdout:rootdir: /home/ubuntu/cephtest/clone.client.0/src/test/pybind, configfile: pytest.ini
2026-03-09T18:30:25.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:24 vm04 ceph-mon[57581]: pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 13 B/s, 1 objects/s recovering
2026-03-09T18:30:25.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:24 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/1656672003' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch
2026-03-09T18:30:25.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:24 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch
2026-03-09T18:30:25.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:24 vm04 ceph-mon[51427]: pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 13 B/s, 1 objects/s recovering
2026-03-09T18:30:25.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:24 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/1656672003' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch
2026-03-09T18:30:25.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:24 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch
2026-03-09T18:30:25.225 INFO:tasks.workunit.client.0.vm04.stdout:collecting ... collected 91 items
2026-03-09T18:30:25.225 INFO:tasks.workunit.client.0.vm04.stdout:
2026-03-09T18:30:25.230 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::test_rados_init_error PASSED [ 1%]
2026-03-09T18:30:25.263 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::test_rados_init PASSED [ 2%]
2026-03-09T18:30:25.274 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::test_ioctx_context_manager PASSED [ 3%]
2026-03-09T18:30:25.280 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::test_parse_argv PASSED [ 4%]
2026-03-09T18:30:25.283 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::test_parse_argv_empty_str PASSED [ 5%]
2026-03-09T18:30:25.288 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRadosStateError::test_configuring PASSED [ 6%]
2026-03-09T18:30:25.298 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRadosStateError::test_connected PASSED [ 7%]
2026-03-09T18:30:25.310 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRadosStateError::test_shutdown PASSED [ 8%]
2026-03-09T18:30:25.328 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_ping_monitor PASSED [ 9%]
2026-03-09T18:30:25.343 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_annotations PASSED [ 10%]
2026-03-09T18:30:25.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:24 vm09 ceph-mon[54744]: pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 13 B/s, 1 objects/s recovering
2026-03-09T18:30:25.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:24 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/1656672003' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch
2026-03-09T18:30:25.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:24 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch
2026-03-09T18:30:26.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:25 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "rbd"}]': finished
2026-03-09T18:30:26.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:25 vm04 ceph-mon[57581]: osdmap e59: 8 total, 8 up, 8 in
2026-03-09T18:30:26.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:25 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/1656672003' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch
2026-03-09T18:30:26.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:25 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch
2026-03-09T18:30:26.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:25 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/1734400349' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-09T18:30:26.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:25 vm04 ceph-mon[57581]: osdmap e60: 8 total, 8 up, 8 in
2026-03-09T18:30:26.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:25 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "rbd"}]': finished
2026-03-09T18:30:26.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:25 vm04 ceph-mon[51427]: osdmap e59: 8 total, 8 up, 8 in
2026-03-09T18:30:26.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:25 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/1656672003' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch
2026-03-09T18:30:26.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:25 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch
2026-03-09T18:30:26.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:25 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/1734400349' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-09T18:30:26.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:25 vm04 ceph-mon[51427]: osdmap e60: 8 total, 8 up, 8 in
2026-03-09T18:30:26.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:25 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "rbd"}]': finished
2026-03-09T18:30:26.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:25 vm09 ceph-mon[54744]: osdmap e59: 8 total, 8 up, 8 in
2026-03-09T18:30:26.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:25 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/1656672003' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch
2026-03-09T18:30:26.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:25 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch
2026-03-09T18:30:26.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:25 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/1734400349' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-09T18:30:26.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:25 vm09 ceph-mon[54744]: osdmap e60: 8 total, 8 up, 8 in
2026-03-09T18:30:26.858 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:30:26 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug there is no tcmu-runner data available
2026-03-09T18:30:26.935 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_create PASSED [ 12%]
2026-03-09T18:30:27.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:26 vm09 ceph-mon[54744]: pgmap v41: 164 pgs: 32 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 578 B/s rd, 0 op/s; 12 B/s, 1 objects/s recovering
2026-03-09T18:30:27.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:26 vm09 ceph-mon[54744]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T18:30:27.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:26 vm09 ceph-mon[54744]: osdmap e61: 8 total, 8 up, 8 in
2026-03-09T18:30:27.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:26 vm04 ceph-mon[57581]: pgmap v41: 164 pgs: 32 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 578 B/s rd, 0 op/s; 12 B/s, 1 objects/s recovering
2026-03-09T18:30:27.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:26 vm04 ceph-mon[57581]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T18:30:27.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:26 vm04 ceph-mon[57581]: osdmap e61: 8 total, 8 up, 8 in
2026-03-09T18:30:27.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:26 vm04 ceph-mon[51427]: pgmap v41: 164 pgs: 32 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 578 B/s rd, 0 op/s; 12 B/s, 1 objects/s recovering
2026-03-09T18:30:27.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:26 vm04 ceph-mon[51427]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T18:30:27.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:26 vm04 ceph-mon[51427]: osdmap e61: 8 total, 8 up, 8 in
2026-03-09T18:30:28.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:27 vm09 ceph-mon[54744]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:30:28.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:27 vm09 ceph-mon[54744]: osdmap e62: 8 total, 8 up, 8 in
2026-03-09T18:30:28.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:27 vm04 ceph-mon[57581]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:30:28.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:27 vm04 ceph-mon[57581]: osdmap e62: 8 total, 8 up, 8 in
2026-03-09T18:30:28.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:27 vm04 ceph-mon[51427]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:30:28.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:27 vm04 ceph-mon[51427]: osdmap e62: 8 total, 8 up, 8 in
2026-03-09T18:30:28.943 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_create_utf8 PASSED [ 13%]
2026-03-09T18:30:29.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:28 vm04 ceph-mon[57581]: pgmap v44: 164 pgs: 164 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s; 18 B/s, 2 objects/s recovering
2026-03-09T18:30:29.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:28 vm04 ceph-mon[57581]: osdmap e63: 8 total, 8 up, 8 in
2026-03-09T18:30:29.217 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:30:28 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ::ffff:192.168.123.109 - - [09/Mar/2026:18:30:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-09T18:30:29.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:28 vm04 ceph-mon[51427]: pgmap v44: 164 pgs: 164 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s; 18 B/s, 2 objects/s recovering
2026-03-09T18:30:29.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:28 vm04 ceph-mon[51427]: osdmap e63: 8 total, 8 up, 8 in
2026-03-09T18:30:29.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:28 vm09 ceph-mon[54744]: pgmap v44: 164 pgs: 164 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s; 18 B/s, 2 objects/s recovering
2026-03-09T18:30:29.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:28 vm09 ceph-mon[54744]: osdmap e63: 8 total, 8 up, 8 in
2026-03-09T18:30:30.949 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_pool_lookup_utf8 PASSED [ 14%]
2026-03-09T18:30:31.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:30 vm04 ceph-mon[57581]: pgmap v47: 164 pgs: 164 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:30:31.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:30 vm04 ceph-mon[57581]: osdmap e64: 8 total, 8 up, 8 in
2026-03-09T18:30:31.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:30 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y'
2026-03-09T18:30:31.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:30 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:30:31.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:30 vm04 ceph-mon[51427]: pgmap v47: 164 pgs: 164 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:30:31.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:30 vm04 ceph-mon[51427]: osdmap e64: 8 total, 8 up, 8 in
2026-03-09T18:30:31.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:30 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y'
2026-03-09T18:30:31.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:30 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:30:31.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:30 vm09 ceph-mon[54744]: pgmap v47: 164 pgs: 164 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:30:31.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:30 vm09 ceph-mon[54744]: osdmap e64: 8 total, 8 up, 8 in
2026-03-09T18:30:31.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:30 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y'
2026-03-09T18:30:31.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:30 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:30:32.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:31 vm04 ceph-mon[57581]: osdmap e65: 8 total, 8 up, 8 in
2026-03-09T18:30:32.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:31 vm04 ceph-mon[51427]: osdmap e65: 8 total, 8 up, 8 in
2026-03-09T18:30:32.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:31 vm09 ceph-mon[54744]: osdmap e65: 8 total, 8 up, 8 in
2026-03-09T18:30:32.990 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_eexist PASSED [ 15%]
2026-03-09T18:30:33.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:32 vm09 ceph-mon[54744]: pgmap v50: 164 pgs: 164 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail
2026-03-09T18:30:33.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:32 vm09 ceph-mon[54744]: osdmap e66: 8 total, 8 up, 8 in
2026-03-09T18:30:33.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:32 vm04 ceph-mon[57581]: pgmap v50: 164 pgs: 164 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail
2026-03-09T18:30:33.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:32 vm04 ceph-mon[57581]: osdmap e66: 8 total, 8 up, 8 in
2026-03-09T18:30:33.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:32 vm04 ceph-mon[51427]: pgmap v50: 164 pgs: 164 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail
2026-03-09T18:30:33.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:32 vm04 ceph-mon[51427]: osdmap e66: 8 total, 8 up, 8 in
2026-03-09T18:30:34.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:34 vm09 ceph-mon[54744]: osdmap e67: 8 total, 8 up, 8 in
2026-03-09T18:30:34.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:34 vm04 ceph-mon[57581]: osdmap e67: 8 total, 8 up, 8 in
2026-03-09T18:30:34.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:34 vm04 ceph-mon[51427]: osdmap e67: 8 total, 8 up, 8 in
2026-03-09T18:30:35.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:35 vm09 ceph-mon[54744]: pgmap v53: 164 pgs: 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:30:35.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:35 vm09 ceph-mon[54744]: osdmap e68: 8 total, 8 up, 8 in
2026-03-09T18:30:35.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:35 vm04 ceph-mon[57581]: pgmap v53: 164 pgs: 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:30:35.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:35 vm04 ceph-mon[57581]: osdmap e68: 8 total, 8 up, 8 in
2026-03-09T18:30:35.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:35 vm04 ceph-mon[51427]: pgmap v53: 164 pgs: 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:30:35.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:35 vm04 ceph-mon[51427]: osdmap e68: 8 total, 8 up, 8 in
2026-03-09T18:30:36.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:36 vm09 ceph-mon[54744]: osdmap e69: 8 total, 8 up, 8 in
2026-03-09T18:30:36.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:36 vm09 ceph-mon[54744]: pgmap v56: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:30:36.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:36 vm04 ceph-mon[57581]: osdmap e69: 8 total, 8 up, 8 in
2026-03-09T18:30:36.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:36 vm04 ceph-mon[57581]: pgmap v56: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:30:36.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:36 vm04 ceph-mon[51427]: osdmap e69: 8 total, 8 up, 8 in
2026-03-09T18:30:36.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:36 vm04 ceph-mon[51427]: pgmap v56: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:30:36.858 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:30:36 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug there is no tcmu-runner data available
2026-03-09T18:30:37.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:37 vm09 ceph-mon[54744]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T18:30:37.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:37 vm09 ceph-mon[54744]: osdmap e70: 8 total, 8 up, 8 in
2026-03-09T18:30:37.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:37 vm09 ceph-mon[54744]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:30:37.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:37 vm04 ceph-mon[57581]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T18:30:37.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:37 vm04 ceph-mon[57581]: osdmap e70: 8 total, 8 up, 8 in
2026-03-09T18:30:37.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:37 vm04 ceph-mon[57581]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:30:37.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:37 vm04 ceph-mon[51427]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T18:30:37.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:37 vm04 ceph-mon[51427]: osdmap e70: 8 total, 8 up, 8 in
2026-03-09T18:30:37.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:37 vm04 ceph-mon[51427]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:30:38.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:38 vm09 ceph-mon[54744]: osdmap e71: 8 total, 8 up, 8 in
2026-03-09T18:30:38.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:38 vm09 ceph-mon[54744]: pgmap v59: 228 pgs: 32 creating+peering, 196 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:30:38.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:38 vm04 ceph-mon[57581]: osdmap e71: 8 total, 8 up, 8 in
2026-03-09T18:30:38.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:38 vm04 ceph-mon[57581]: pgmap v59: 228 pgs: 32 creating+peering, 196 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:30:38.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:38 vm04 ceph-mon[51427]: osdmap e71: 8 total, 8 up, 8 in
2026-03-09T18:30:38.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:38 vm04 ceph-mon[51427]: pgmap v59: 228 pgs: 32 creating+peering, 196 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:30:39.217 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:30:38 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ::ffff:192.168.123.109 - - [09/Mar/2026:18:30:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-09T18:30:39.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:39 vm09 ceph-mon[54744]: osdmap e72: 8 total, 8 up, 8 in
2026-03-09T18:30:39.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:39 vm09 ceph-mon[54744]: osdmap e73: 8 total, 8 up, 8 in
2026-03-09T18:30:39.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:39 vm04 ceph-mon[57581]: osdmap e72: 8 total, 8 up, 8 in
2026-03-09T18:30:39.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:39 vm04 ceph-mon[57581]: osdmap e73: 8 total, 8 up, 8 in
2026-03-09T18:30:39.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:39 vm04 ceph-mon[51427]: osdmap e72: 8 total, 8 up, 8 in
2026-03-09T18:30:39.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:39 vm04 ceph-mon[51427]: osdmap e73: 8 total, 8 up, 8 in
2026-03-09T18:30:40.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:40 vm09 ceph-mon[54744]: pgmap v62: 164 pgs: 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:30:40.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:40 vm04 ceph-mon[57581]: pgmap v62: 164 pgs: 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:30:40.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:40 vm04 ceph-mon[51427]: pgmap v62: 164 pgs: 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:30:41.289 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_list_pools PASSED [ 16%]
2026-03-09T18:30:41.567 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:41 vm04 ceph-mon[57581]: osdmap e74: 8 total, 8 up, 8 in
2026-03-09T18:30:41.567 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:41 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:30:41.567 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:41 vm04 ceph-mon[57581]: osdmap e75: 8 total, 8 up, 8 in
2026-03-09T18:30:41.567 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:41 vm04 ceph-mon[51427]: osdmap e74: 8 total, 8 up, 8 in
2026-03-09T18:30:41.567 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:41 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:30:41.567 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:41 vm04 ceph-mon[51427]: osdmap e75: 8 total, 8 up, 8 in
2026-03-09T18:30:41.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:41 vm09 ceph-mon[54744]: osdmap e74: 8 total, 8 up, 8 in
2026-03-09T18:30:41.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:41 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:30:41.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:41 vm09 ceph-mon[54744]: osdmap e75: 8 total, 8 up, 8 in
2026-03-09T18:30:42.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:42 vm09 ceph-mon[54744]: pgmap v65: 164 pgs: 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail
2026-03-09T18:30:42.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:42 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y'
2026-03-09T18:30:42.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:42 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y'
2026-03-09T18:30:42.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:42 vm04 ceph-mon[57581]: pgmap v65: 164 pgs: 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail
2026-03-09T18:30:42.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:42 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y'
2026-03-09T18:30:42.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:42 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y'
2026-03-09T18:30:42.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:42 vm04 ceph-mon[51427]: pgmap v65: 164 pgs: 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail
2026-03-09T18:30:42.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:42 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y'
2026-03-09T18:30:42.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:42 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y'
2026-03-09T18:30:43.354 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:30:42 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=infra.usagestats t=2026-03-09T18:30:42.956099554Z level=info msg="Usage stats are ready to report"
2026-03-09T18:30:43.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:43 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y'
2026-03-09T18:30:43.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:43 vm09 ceph-mon[54744]: osdmap e76: 8 total, 8 up, 8 in
2026-03-09T18:30:43.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:43 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y'
2026-03-09T18:30:43.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:43 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:30:43.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:43 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:30:43.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:43 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y'
2026-03-09T18:30:43.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:43 vm09 ceph-mon[54744]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T18:30:43.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:43 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y'
2026-03-09T18:30:43.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:43 vm04 ceph-mon[57581]: osdmap e76: 8 total, 8 up, 8 in
2026-03-09T18:30:43.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:43 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y'
2026-03-09T18:30:43.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:43 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:30:43.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:43 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:30:43.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:43 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y'
2026-03-09T18:30:43.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:43 vm04 ceph-mon[57581]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T18:30:43.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:43 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y'
2026-03-09T18:30:43.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:43 vm04 ceph-mon[51427]: osdmap e76: 8 total, 8 up, 8 in
2026-03-09T18:30:43.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:43 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y'
2026-03-09T18:30:43.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:43 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:30:43.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:43 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:30:43.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:43 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y'
2026-03-09T18:30:43.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:43 vm04 ceph-mon[51427]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T18:30:44.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:44 vm09 ceph-mon[54744]: osdmap e77: 8 total, 8 up, 8 in
2026-03-09T18:30:44.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:44 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3139687059' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]: dispatch
2026-03-09T18:30:44.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:44 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]: dispatch
2026-03-09T18:30:44.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:44 vm09 ceph-mon[54744]: pgmap v68: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:30:44.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:44 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]': finished
2026-03-09T18:30:44.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:44 vm09 ceph-mon[54744]: osdmap e78: 8 total, 8 up, 8 in
2026-03-09T18:30:44.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:44 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3139687059' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]: dispatch
2026-03-09T18:30:44.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:44 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]: dispatch
2026-03-09T18:30:44.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:44 vm04 ceph-mon[57581]: osdmap e77: 8 total, 8 up, 8 in
2026-03-09T18:30:44.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:44 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3139687059' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]: dispatch
2026-03-09T18:30:44.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:44 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]: dispatch
2026-03-09T18:30:44.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:44 vm04 ceph-mon[57581]: pgmap v68: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:30:44.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:44 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]': finished
2026-03-09T18:30:44.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:44 vm04 ceph-mon[57581]: osdmap e78: 8 total, 8 up, 8 in
2026-03-09T18:30:44.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:44 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3139687059' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]: dispatch
2026-03-09T18:30:44.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:44 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]: dispatch
2026-03-09T18:30:44.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:44 vm04 ceph-mon[51427]: osdmap e77: 8 total, 8 up, 8 in
2026-03-09T18:30:44.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:44 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3139687059' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]: dispatch
2026-03-09T18:30:44.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:44 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]: dispatch
2026-03-09T18:30:44.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:44 vm04 ceph-mon[51427]: pgmap v68: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:30:44.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:44 vm04 ceph-mon[51427]: from='client.?
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]': finished 2026-03-09T18:30:44.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:44 vm04 ceph-mon[51427]: osdmap e78: 8 total, 8 up, 8 in 2026-03-09T18:30:44.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:44 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3139687059' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T18:30:44.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:44 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T18:30:46.608 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:30:46 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug there is no tcmu-runner data available 2026-03-09T18:30:46.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:46 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]': finished 2026-03-09T18:30:46.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:46 vm09 ceph-mon[54744]: osdmap e79: 8 total, 8 up, 8 in 2026-03-09T18:30:46.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:46 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3139687059' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]: dispatch 2026-03-09T18:30:46.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:46 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]: dispatch 2026-03-09T18:30:46.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:46 vm09 ceph-mon[54744]: pgmap v71: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:46.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:46 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:30:46.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:46 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]': finished 2026-03-09T18:30:46.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:46 vm04 ceph-mon[57581]: osdmap e79: 8 total, 8 up, 8 in 2026-03-09T18:30:46.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:46 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3139687059' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]: dispatch 2026-03-09T18:30:46.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:46 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]: dispatch 2026-03-09T18:30:46.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:46 vm04 ceph-mon[57581]: pgmap v71: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:46.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:46 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:30:46.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:46 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]': finished 2026-03-09T18:30:46.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:46 vm04 ceph-mon[51427]: osdmap e79: 8 total, 8 up, 8 in 2026-03-09T18:30:46.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:46 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3139687059' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]: dispatch 2026-03-09T18:30:46.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:46 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]: dispatch 2026-03-09T18:30:46.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:46 vm04 ceph-mon[51427]: pgmap v71: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:46.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:46 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:30:47.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:47 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]': finished 2026-03-09T18:30:47.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:47 vm09 ceph-mon[54744]: osdmap e80: 8 total, 8 up, 8 in 2026-03-09T18:30:47.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:47 vm09 ceph-mon[54744]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:30:47.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:47 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]': finished 2026-03-09T18:30:47.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:47 vm04 ceph-mon[57581]: osdmap e80: 8 total, 8 up, 8 in 2026-03-09T18:30:47.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:47 vm04 ceph-mon[57581]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:30:47.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:47 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]': finished 2026-03-09T18:30:47.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:47 vm04 ceph-mon[51427]: osdmap e80: 8 total, 8 up, 8 in 2026-03-09T18:30:47.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:47 vm04 ceph-mon[51427]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:30:48.358 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_get_pool_base_tier PASSED [ 17%] 2026-03-09T18:30:48.375 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_get_fsid PASSED [ 18%] 2026-03-09T18:30:48.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:48 vm04 ceph-mon[57581]: osdmap e81: 8 total, 8 up, 8 in 2026-03-09T18:30:48.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:48 vm04 ceph-mon[57581]: pgmap v74: 196 pgs: 196 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:48.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:48 vm04 ceph-mon[57581]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:30:48.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:48 vm04 ceph-mon[51427]: osdmap e81: 8 total, 8 up, 8 in 2026-03-09T18:30:48.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:48 vm04 ceph-mon[51427]: pgmap v74: 196 pgs: 196 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:48.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:48 vm04 ceph-mon[51427]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:30:48.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:48 vm09 ceph-mon[54744]: osdmap e81: 8 total, 8 up, 8 in 
2026-03-09T18:30:48.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:48 vm09 ceph-mon[54744]: pgmap v74: 196 pgs: 196 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:48.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:48 vm09 ceph-mon[54744]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:30:49.217 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:30:48 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ::ffff:192.168.123.109 - - [09/Mar/2026:18:30:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:30:49.388 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_blocklist_add PASSED [ 19%] 2026-03-09T18:30:49.400 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_get_cluster_stats PASSED [ 20%] 2026-03-09T18:30:49.412 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_monitor_log PASSED [ 21%] 2026-03-09T18:30:49.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:49 vm04 ceph-mon[57581]: osdmap e82: 8 total, 8 up, 8 in 2026-03-09T18:30:49.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:49 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3390281230' entity='client.admin' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]: dispatch 2026-03-09T18:30:49.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:49 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]: dispatch 2026-03-09T18:30:49.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:49 vm04 ceph-mon[51427]: osdmap e82: 8 total, 8 up, 8 in 2026-03-09T18:30:49.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:49 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3390281230' entity='client.admin' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]: dispatch 2026-03-09T18:30:49.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:49 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]: dispatch 2026-03-09T18:30:49.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:49 vm09 ceph-mon[54744]: osdmap e82: 8 total, 8 up, 8 in 2026-03-09T18:30:49.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:49 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3390281230' entity='client.admin' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]: dispatch 2026-03-09T18:30:49.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:49 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]: dispatch 2026-03-09T18:30:50.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:50 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]': finished 2026-03-09T18:30:50.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:50 vm04 ceph-mon[57581]: osdmap e83: 8 total, 8 up, 8 in 2026-03-09T18:30:50.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:50 vm04 ceph-mon[57581]: pgmap v77: 164 pgs: 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:50.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:50 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]': finished 2026-03-09T18:30:50.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:50 vm04 ceph-mon[51427]: osdmap e83: 8 total, 8 up, 8 in 2026-03-09T18:30:50.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:50 vm04 ceph-mon[51427]: pgmap v77: 164 pgs: 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:50.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:50 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]': finished 2026-03-09T18:30:50.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:50 vm09 ceph-mon[54744]: osdmap e83: 8 total, 8 up, 8 in 2026-03-09T18:30:50.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:50 vm09 ceph-mon[54744]: pgmap v77: 164 pgs: 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:51.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:51 vm04 ceph-mon[57581]: osdmap e84: 8 total, 8 up, 8 in 2026-03-09T18:30:51.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:51 vm04 ceph-mon[57581]: from='client.? 
192.168.123.104:0/3086962345' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:30:51.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:51 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:30:51.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:51 vm04 ceph-mon[51427]: osdmap e84: 8 total, 8 up, 8 in 2026-03-09T18:30:51.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:51 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3086962345' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:30:51.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:51 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:30:51.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:51 vm09 ceph-mon[54744]: osdmap e84: 8 total, 8 up, 8 in 2026-03-09T18:30:51.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:51 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3086962345' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:30:51.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:51 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:30:52.400 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_get_last_version PASSED [ 23%] 2026-03-09T18:30:52.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:52 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:30:52.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:52 vm04 ceph-mon[57581]: osdmap e85: 8 total, 8 up, 8 in 2026-03-09T18:30:52.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:52 vm04 ceph-mon[57581]: pgmap v80: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:30:52.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:52 vm04 ceph-mon[57581]: osdmap e86: 8 total, 8 up, 8 in 2026-03-09T18:30:52.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:52 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:30:52.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:52 vm04 ceph-mon[51427]: osdmap e85: 8 total, 8 up, 8 in 2026-03-09T18:30:52.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:52 vm04 ceph-mon[51427]: pgmap v80: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:30:52.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:52 vm04 ceph-mon[51427]: osdmap e86: 8 total, 8 up, 8 in 2026-03-09T18:30:52.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:52 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:30:52.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:52 vm09 ceph-mon[54744]: osdmap e85: 8 total, 8 up, 8 in 2026-03-09T18:30:52.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:52 vm09 ceph-mon[54744]: pgmap v80: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:30:52.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:52 vm09 ceph-mon[54744]: osdmap e86: 8 total, 8 up, 8 in 2026-03-09T18:30:54.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:53 vm09 ceph-mon[54744]: osdmap e87: 8 total, 8 up, 8 in 2026-03-09T18:30:54.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:53 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/1352434024' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:30:54.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:53 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:30:54.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:53 vm04 ceph-mon[57581]: osdmap e87: 8 total, 8 up, 8 in 2026-03-09T18:30:54.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:53 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/1352434024' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:30:54.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:53 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:30:54.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:53 vm04 ceph-mon[51427]: osdmap e87: 8 total, 8 up, 8 in 2026-03-09T18:30:54.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:53 vm04 ceph-mon[51427]: from='client.? 
192.168.123.104:0/1352434024' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:30:54.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:53 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:30:54.756 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_get_stats PASSED [ 24%] 2026-03-09T18:30:55.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:54 vm09 ceph-mon[54744]: pgmap v83: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 227 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:55.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:54 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:30:55.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:54 vm09 ceph-mon[54744]: osdmap e88: 8 total, 8 up, 8 in 2026-03-09T18:30:55.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:54 vm09 ceph-mon[54744]: osdmap e89: 8 total, 8 up, 8 in 2026-03-09T18:30:55.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:54 vm04 ceph-mon[57581]: pgmap v83: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 227 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:55.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:54 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:30:55.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:54 vm04 ceph-mon[57581]: osdmap e88: 8 total, 8 up, 8 in 2026-03-09T18:30:55.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:54 vm04 ceph-mon[57581]: osdmap e89: 8 total, 8 up, 8 in 2026-03-09T18:30:55.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:54 vm04 ceph-mon[51427]: pgmap v83: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 227 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:55.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:54 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:30:55.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:54 vm04 ceph-mon[51427]: osdmap e88: 8 total, 8 up, 8 in 2026-03-09T18:30:55.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:54 vm04 ceph-mon[51427]: osdmap e89: 8 total, 8 up, 8 in 2026-03-09T18:30:56.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:55 vm09 ceph-mon[54744]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:30:56.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:55 vm09 ceph-mon[54744]: osdmap e90: 8 total, 8 up, 8 in 2026-03-09T18:30:56.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:55 vm04 ceph-mon[57581]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:30:56.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:55 vm04 ceph-mon[57581]: osdmap e90: 8 total, 8 up, 8 in 2026-03-09T18:30:56.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:55 vm04 ceph-mon[51427]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:30:56.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:55 vm04 ceph-mon[51427]: osdmap e90: 8 total, 8 up, 
8 in 2026-03-09T18:30:56.818 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:30:56 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug there is no tcmu-runner data available 2026-03-09T18:30:57.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:56 vm09 ceph-mon[54744]: pgmap v86: 164 pgs: 164 active+clean; 455 KiB data, 227 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:57.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:56 vm09 ceph-mon[54744]: osdmap e91: 8 total, 8 up, 8 in 2026-03-09T18:30:57.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:56 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/841891777' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:30:57.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:56 vm04 ceph-mon[57581]: pgmap v86: 164 pgs: 164 active+clean; 455 KiB data, 227 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:57.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:56 vm04 ceph-mon[57581]: osdmap e91: 8 total, 8 up, 8 in 2026-03-09T18:30:57.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:56 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/841891777' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:30:57.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:56 vm04 ceph-mon[51427]: pgmap v86: 164 pgs: 164 active+clean; 455 KiB data, 227 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:57.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:56 vm04 ceph-mon[51427]: osdmap e91: 8 total, 8 up, 8 in 2026-03-09T18:30:57.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:56 vm04 ceph-mon[51427]: from='client.? 
192.168.123.104:0/841891777' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:30:58.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:57 vm09 ceph-mon[54744]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:30:58.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:57 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/841891777' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:30:58.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:57 vm09 ceph-mon[54744]: osdmap e92: 8 total, 8 up, 8 in 2026-03-09T18:30:58.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:57 vm04 ceph-mon[57581]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:30:58.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:57 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/841891777' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:30:58.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:57 vm04 ceph-mon[57581]: osdmap e92: 8 total, 8 up, 8 in 2026-03-09T18:30:58.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:57 vm04 ceph-mon[51427]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:30:58.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:57 vm04 ceph-mon[51427]: from='client.? 
192.168.123.104:0/841891777' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:30:58.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:57 vm04 ceph-mon[51427]: osdmap e92: 8 total, 8 up, 8 in 2026-03-09T18:30:58.787 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_write PASSED [ 25%] 2026-03-09T18:30:59.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:58 vm09 ceph-mon[54744]: pgmap v89: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 232 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:59.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:30:58 vm09 ceph-mon[54744]: osdmap e93: 8 total, 8 up, 8 in 2026-03-09T18:30:59.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:58 vm04 ceph-mon[57581]: pgmap v89: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 232 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:59.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:30:58 vm04 ceph-mon[57581]: osdmap e93: 8 total, 8 up, 8 in 2026-03-09T18:30:59.217 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:30:58 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ::ffff:192.168.123.109 - - [09/Mar/2026:18:30:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:30:59.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:58 vm04 ceph-mon[51427]: pgmap v89: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 232 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:59.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:30:58 vm04 ceph-mon[51427]: osdmap e93: 8 total, 8 up, 8 in 2026-03-09T18:31:01.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:00 vm09 ceph-mon[54744]: pgmap v92: 164 pgs: 164 active+clean; 455 KiB data, 232 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:01.108 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:00 vm09 ceph-mon[54744]: osdmap e94: 8 total, 8 up, 8 in 2026-03-09T18:31:01.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:00 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:31:01.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:00 vm04 ceph-mon[57581]: pgmap v92: 164 pgs: 164 active+clean; 455 KiB data, 232 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:01.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:00 vm04 ceph-mon[57581]: osdmap e94: 8 total, 8 up, 8 in 2026-03-09T18:31:01.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:00 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:31:01.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:00 vm04 ceph-mon[51427]: pgmap v92: 164 pgs: 164 active+clean; 455 KiB data, 232 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:01.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:00 vm04 ceph-mon[51427]: osdmap e94: 8 total, 8 up, 8 in 2026-03-09T18:31:01.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:00 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:31:02.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:01 vm09 ceph-mon[54744]: osdmap e95: 8 total, 8 up, 8 in 2026-03-09T18:31:02.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:01 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/4156488502' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:02.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:01 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:02.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:01 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:02.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:01 vm09 ceph-mon[54744]: osdmap e96: 8 total, 8 up, 8 in 2026-03-09T18:31:02.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:01 vm04 ceph-mon[57581]: osdmap e95: 8 total, 8 up, 8 in 2026-03-09T18:31:02.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:01 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/4156488502' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:02.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:01 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:02.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:01 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:02.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:01 vm04 ceph-mon[57581]: osdmap e96: 8 total, 8 up, 8 in 2026-03-09T18:31:02.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:01 vm04 ceph-mon[51427]: osdmap e95: 8 total, 8 up, 8 in 2026-03-09T18:31:02.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:01 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/4156488502' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:02.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:01 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:02.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:01 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:02.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:01 vm04 ceph-mon[51427]: osdmap e96: 8 total, 8 up, 8 in 2026-03-09T18:31:02.812 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_write_full PASSED [ 26%] 2026-03-09T18:31:03.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:02 vm09 ceph-mon[54744]: pgmap v95: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 232 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:31:03.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:02 vm09 ceph-mon[54744]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:31:03.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:02 vm09 ceph-mon[54744]: osdmap e97: 8 total, 8 up, 8 in 2026-03-09T18:31:03.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:02 vm04 ceph-mon[57581]: pgmap v95: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 232 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:31:03.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:02 vm04 ceph-mon[57581]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:31:03.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:02 vm04 ceph-mon[57581]: osdmap e97: 8 total, 8 up, 8 in 2026-03-09T18:31:03.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:02 vm04 ceph-mon[51427]: pgmap v95: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 232 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:31:03.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:02 vm04 ceph-mon[51427]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:31:03.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:02 vm04 ceph-mon[51427]: osdmap e97: 8 total, 8 up, 8 in 2026-03-09T18:31:05.216 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:04 vm04 ceph-mon[57581]: pgmap v98: 164 pgs: 164 active+clean; 455 KiB data, 272 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:05.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:04 vm04 ceph-mon[57581]: osdmap e98: 8 total, 8 up, 8 in 2026-03-09T18:31:05.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:04 vm04 ceph-mon[51427]: pgmap v98: 164 pgs: 164 active+clean; 455 KiB data, 272 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:05.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:04 vm04 ceph-mon[51427]: osdmap e98: 8 total, 8 up, 8 in 2026-03-09T18:31:05.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:04 vm09 ceph-mon[54744]: pgmap v98: 164 pgs: 164 active+clean; 455 KiB data, 272 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:05.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:04 vm09 ceph-mon[54744]: osdmap e98: 8 total, 8 up, 8 in 2026-03-09T18:31:06.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:05 vm04 ceph-mon[57581]: osdmap e99: 8 total, 8 up, 8 in 2026-03-09T18:31:06.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:05 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/1819495968' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:06.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:05 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:06.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:05 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:06.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:05 vm04 ceph-mon[57581]: osdmap e100: 8 total, 8 up, 8 in 2026-03-09T18:31:06.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:05 vm04 ceph-mon[51427]: osdmap e99: 8 total, 8 up, 8 in 2026-03-09T18:31:06.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:05 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/1819495968' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:06.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:05 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:06.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:05 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:06.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:05 vm04 ceph-mon[51427]: osdmap e100: 8 total, 8 up, 8 in 2026-03-09T18:31:06.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:05 vm09 ceph-mon[54744]: osdmap e99: 8 total, 8 up, 8 in 2026-03-09T18:31:06.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:05 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/1819495968' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:06.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:05 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:06.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:05 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:06.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:05 vm09 ceph-mon[54744]: osdmap e100: 8 total, 8 up, 8 in 2026-03-09T18:31:06.858 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:31:06 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug there is no tcmu-runner data available 2026-03-09T18:31:06.897 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_writesame PASSED [ 27%] 2026-03-09T18:31:07.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:06 vm04 ceph-mon[57581]: pgmap v101: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 272 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:07.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:06 vm04 ceph-mon[57581]: osdmap e101: 8 total, 8 up, 8 in 2026-03-09T18:31:07.216 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:06 vm04 ceph-mon[51427]: pgmap v101: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 272 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:07.216 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:06 vm04 ceph-mon[51427]: osdmap e101: 8 total, 8 up, 8 in 2026-03-09T18:31:07.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:06 vm09 ceph-mon[54744]: pgmap v101: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 272 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:07.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:06 vm09 ceph-mon[54744]: osdmap e101: 8 total, 8 up, 8 in 2026-03-09T18:31:08.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:07 vm04 ceph-mon[57581]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:31:08.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:07 vm04 ceph-mon[57581]: osdmap e102: 8 total, 8 
up, 8 in 2026-03-09T18:31:08.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:07 vm04 ceph-mon[51427]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:31:08.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:07 vm04 ceph-mon[51427]: osdmap e102: 8 total, 8 up, 8 in 2026-03-09T18:31:08.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:07 vm09 ceph-mon[54744]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:31:08.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:07 vm09 ceph-mon[54744]: osdmap e102: 8 total, 8 up, 8 in 2026-03-09T18:31:09.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:08 vm04 ceph-mon[57581]: pgmap v104: 164 pgs: 164 active+clean; 455 KiB data, 280 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:09.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:08 vm04 ceph-mon[57581]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:31:09.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:08 vm04 ceph-mon[57581]: osdmap e103: 8 total, 8 up, 8 in 2026-03-09T18:31:09.217 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:31:08 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ::ffff:192.168.123.109 - - [09/Mar/2026:18:31:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:31:09.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:08 vm04 ceph-mon[51427]: pgmap v104: 164 pgs: 164 active+clean; 455 KiB data, 280 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:09.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:08 vm04 ceph-mon[51427]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:31:09.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:08 vm04 ceph-mon[51427]: 
osdmap e103: 8 total, 8 up, 8 in 2026-03-09T18:31:09.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:08 vm09 ceph-mon[54744]: pgmap v104: 164 pgs: 164 active+clean; 455 KiB data, 280 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:09.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:08 vm09 ceph-mon[54744]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:31:09.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:08 vm09 ceph-mon[54744]: osdmap e103: 8 total, 8 up, 8 in 2026-03-09T18:31:10.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:09 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3468384751' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:10.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:09 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:10.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:09 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:10.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:09 vm04 ceph-mon[57581]: osdmap e104: 8 total, 8 up, 8 in 2026-03-09T18:31:10.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:09 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3468384751' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:10.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:09 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:10.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:09 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:10.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:09 vm04 ceph-mon[51427]: osdmap e104: 8 total, 8 up, 8 in 2026-03-09T18:31:10.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:09 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3468384751' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:10.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:09 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:10.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:09 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:10.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:09 vm09 ceph-mon[54744]: osdmap e104: 8 total, 8 up, 8 in 2026-03-09T18:31:11.012 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_append PASSED [ 28%] 2026-03-09T18:31:11.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:11 vm09 ceph-mon[54744]: pgmap v107: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 280 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:11.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:11 vm04 ceph-mon[57581]: pgmap v107: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 280 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:11.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:11 vm04 ceph-mon[51427]: pgmap v107: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 280 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:12.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:12 vm09 ceph-mon[54744]: osdmap e105: 8 total, 8 up, 8 in 2026-03-09T18:31:12.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 
09 18:31:12 vm09 ceph-mon[54744]: pgmap v110: 164 pgs: 164 active+clean; 455 KiB data, 280 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:31:12.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:12 vm09 ceph-mon[54744]: osdmap e106: 8 total, 8 up, 8 in 2026-03-09T18:31:12.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:12 vm04 ceph-mon[57581]: osdmap e105: 8 total, 8 up, 8 in 2026-03-09T18:31:12.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:12 vm04 ceph-mon[57581]: pgmap v110: 164 pgs: 164 active+clean; 455 KiB data, 280 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:31:12.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:12 vm04 ceph-mon[57581]: osdmap e106: 8 total, 8 up, 8 in 2026-03-09T18:31:12.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:12 vm04 ceph-mon[51427]: osdmap e105: 8 total, 8 up, 8 in 2026-03-09T18:31:12.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:12 vm04 ceph-mon[51427]: pgmap v110: 164 pgs: 164 active+clean; 455 KiB data, 280 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:31:12.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:12 vm04 ceph-mon[51427]: osdmap e106: 8 total, 8 up, 8 in 2026-03-09T18:31:14.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:14 vm09 ceph-mon[54744]: osdmap e107: 8 total, 8 up, 8 in 2026-03-09T18:31:14.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:14 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3047283483' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:14.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:14 vm04 ceph-mon[57581]: osdmap e107: 8 total, 8 up, 8 in 2026-03-09T18:31:14.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:14 vm04 ceph-mon[57581]: from='client.? 
192.168.123.104:0/3047283483' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:14.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:14 vm04 ceph-mon[51427]: osdmap e107: 8 total, 8 up, 8 in 2026-03-09T18:31:14.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:14 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3047283483' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:15.026 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_write_zeros PASSED [ 29%] 2026-03-09T18:31:15.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:15 vm09 ceph-mon[54744]: pgmap v113: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:15.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:15 vm09 ceph-mon[54744]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:31:15.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:15 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3047283483' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:15.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:15 vm09 ceph-mon[54744]: osdmap e108: 8 total, 8 up, 8 in 2026-03-09T18:31:15.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:15 vm04 ceph-mon[57581]: pgmap v113: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:15.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:15 vm04 ceph-mon[57581]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:31:15.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:15 vm04 ceph-mon[57581]: from='client.? 
192.168.123.104:0/3047283483' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:15.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:15 vm04 ceph-mon[57581]: osdmap e108: 8 total, 8 up, 8 in 2026-03-09T18:31:15.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:15 vm04 ceph-mon[51427]: pgmap v113: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:15.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:15 vm04 ceph-mon[51427]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:31:15.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:15 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3047283483' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:15.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:15 vm04 ceph-mon[51427]: osdmap e108: 8 total, 8 up, 8 in 2026-03-09T18:31:16.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:16 vm09 ceph-mon[54744]: osdmap e109: 8 total, 8 up, 8 in 2026-03-09T18:31:16.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:16 vm09 ceph-mon[54744]: pgmap v116: 164 pgs: 164 active+clean; 455 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:16.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:16 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:31:16.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:16 vm04 ceph-mon[57581]: osdmap e109: 8 total, 8 up, 8 in 2026-03-09T18:31:16.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:16 vm04 ceph-mon[57581]: pgmap v116: 164 pgs: 164 active+clean; 455 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:16.466 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:16 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:31:16.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:16 vm04 ceph-mon[51427]: osdmap e109: 8 total, 8 up, 8 in 2026-03-09T18:31:16.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:16 vm04 ceph-mon[51427]: pgmap v116: 164 pgs: 164 active+clean; 455 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:16.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:16 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:31:16.858 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:31:16 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug there is no tcmu-runner data available 2026-03-09T18:31:17.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:17 vm09 ceph-mon[54744]: osdmap e110: 8 total, 8 up, 8 in 2026-03-09T18:31:17.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:17 vm09 ceph-mon[54744]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:31:17.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:17 vm04 ceph-mon[57581]: osdmap e110: 8 total, 8 up, 8 in 2026-03-09T18:31:17.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:17 vm04 ceph-mon[57581]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:31:17.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:17 vm04 ceph-mon[51427]: osdmap e110: 8 total, 8 up, 8 in 2026-03-09T18:31:17.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:17 vm04 ceph-mon[51427]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", 
"format": "json"}]: dispatch 2026-03-09T18:31:18.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:18 vm09 ceph-mon[54744]: osdmap e111: 8 total, 8 up, 8 in 2026-03-09T18:31:18.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:18 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/1549042398' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:18.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:18 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:18.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:18 vm09 ceph-mon[54744]: pgmap v119: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:18.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:18 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:18.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:18 vm09 ceph-mon[54744]: osdmap e112: 8 total, 8 up, 8 in 2026-03-09T18:31:18.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:18 vm04 ceph-mon[57581]: osdmap e111: 8 total, 8 up, 8 in 2026-03-09T18:31:18.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:18 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/1549042398' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:18.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:18 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:18.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:18 vm04 ceph-mon[57581]: pgmap v119: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:18.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:18 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:18.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:18 vm04 ceph-mon[57581]: osdmap e112: 8 total, 8 up, 8 in 2026-03-09T18:31:18.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:18 vm04 ceph-mon[51427]: osdmap e111: 8 total, 8 up, 8 in 2026-03-09T18:31:18.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:18 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/1549042398' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:18.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:18 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:18.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:18 vm04 ceph-mon[51427]: pgmap v119: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:18.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:18 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:18.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:18 vm04 ceph-mon[51427]: osdmap e112: 8 total, 8 up, 8 in 2026-03-09T18:31:19.060 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_trunc PASSED [ 30%] 2026-03-09T18:31:19.217 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:31:18 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ::ffff:192.168.123.109 - - [09/Mar/2026:18:31:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:31:20.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:20 vm09 ceph-mon[54744]: osdmap e113: 8 total, 8 up, 8 in 2026-03-09T18:31:20.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:20 vm09 ceph-mon[54744]: pgmap v122: 164 pgs: 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:20.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:20 vm04 ceph-mon[57581]: osdmap e113: 8 total, 8 up, 8 in 2026-03-09T18:31:20.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:20 vm04 ceph-mon[57581]: pgmap v122: 164 pgs: 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:20.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:20 vm04 ceph-mon[51427]: osdmap e113: 8 total, 8 up, 8 in 2026-03-09T18:31:20.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:20 vm04 ceph-mon[51427]: pgmap v122: 164 pgs: 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:21.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:21 vm09 ceph-mon[54744]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:31:21.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:21 vm09 ceph-mon[54744]: osdmap e114: 8 total, 8 up, 8 in 
2026-03-09T18:31:21.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:21 vm04 ceph-mon[57581]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:31:21.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:21 vm04 ceph-mon[57581]: osdmap e114: 8 total, 8 up, 8 in 2026-03-09T18:31:21.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:21 vm04 ceph-mon[51427]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:31:21.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:21 vm04 ceph-mon[51427]: osdmap e114: 8 total, 8 up, 8 in 2026-03-09T18:31:22.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:22 vm09 ceph-mon[54744]: osdmap e115: 8 total, 8 up, 8 in 2026-03-09T18:31:22.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:22 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3381889738' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:22.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:22 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:22.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:22 vm09 ceph-mon[54744]: pgmap v125: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:31:22.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:22 vm04 ceph-mon[57581]: osdmap e115: 8 total, 8 up, 8 in 2026-03-09T18:31:22.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:22 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3381889738' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:22.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:22 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:22.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:22 vm04 ceph-mon[57581]: pgmap v125: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:31:22.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:22 vm04 ceph-mon[51427]: osdmap e115: 8 total, 8 up, 8 in 2026-03-09T18:31:22.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:22 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3381889738' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:22.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:22 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:22.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:22 vm04 ceph-mon[51427]: pgmap v125: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:31:23.089 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_cmpext PASSED [ 31%] 2026-03-09T18:31:23.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:23 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:23.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:23 vm09 ceph-mon[54744]: osdmap e116: 8 total, 8 up, 8 in 2026-03-09T18:31:23.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:23 vm09 ceph-mon[54744]: osdmap e117: 8 total, 8 up, 8 in 2026-03-09T18:31:23.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:23 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:23.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:23 vm04 ceph-mon[57581]: osdmap e116: 8 total, 8 up, 8 in 2026-03-09T18:31:23.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:23 vm04 ceph-mon[57581]: osdmap e117: 8 total, 8 up, 8 in 2026-03-09T18:31:23.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:23 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:23.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:23 vm04 ceph-mon[51427]: osdmap e116: 8 total, 8 up, 8 in 2026-03-09T18:31:23.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:23 vm04 ceph-mon[51427]: osdmap e117: 8 total, 8 up, 8 in 2026-03-09T18:31:24.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:24 vm09 ceph-mon[54744]: pgmap v128: 164 pgs: 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:24.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:24 vm04 ceph-mon[57581]: pgmap v128: 164 pgs: 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:24.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:24 vm04 ceph-mon[51427]: pgmap v128: 164 pgs: 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:25.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:25 vm04 ceph-mon[57581]: osdmap e118: 8 total, 8 up, 8 in 2026-03-09T18:31:25.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:25 vm04 ceph-mon[51427]: osdmap e118: 8 total, 8 up, 8 in 2026-03-09T18:31:25.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:25 vm09 ceph-mon[54744]: osdmap e118: 8 total, 8 up, 8 in 2026-03-09T18:31:26.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:26 vm04 ceph-mon[57581]: osdmap e119: 8 total, 8 up, 8 in 
2026-03-09T18:31:26.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:26 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3761928551' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:26.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:26 vm04 ceph-mon[57581]: pgmap v131: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:26.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:26 vm04 ceph-mon[51427]: osdmap e119: 8 total, 8 up, 8 in 2026-03-09T18:31:26.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:26 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3761928551' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:26.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:26 vm04 ceph-mon[51427]: pgmap v131: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:26.569 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:26 vm09 ceph-mon[54744]: osdmap e119: 8 total, 8 up, 8 in 2026-03-09T18:31:26.569 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:26 vm09 ceph-mon[54744]: from='client.? 
192.168.123.104:0/3761928551' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:26.569 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:26 vm09 ceph-mon[54744]: pgmap v131: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:26.858 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:31:26 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug there is no tcmu-runner data available 2026-03-09T18:31:27.136 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_list_objects_empty PASSED [ 32%] 2026-03-09T18:31:27.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:27 vm04 ceph-mon[57581]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:31:27.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:27 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3761928551' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:27.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:27 vm04 ceph-mon[57581]: osdmap e120: 8 total, 8 up, 8 in 2026-03-09T18:31:27.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:27 vm04 ceph-mon[57581]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:31:27.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:27 vm04 ceph-mon[51427]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:31:27.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:27 vm04 ceph-mon[51427]: from='client.? 
192.168.123.104:0/3761928551' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:27.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:27 vm04 ceph-mon[51427]: osdmap e120: 8 total, 8 up, 8 in 2026-03-09T18:31:27.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:27 vm04 ceph-mon[51427]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:31:27.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:27 vm09 ceph-mon[54744]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:31:27.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:27 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3761928551' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:27.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:27 vm09 ceph-mon[54744]: osdmap e120: 8 total, 8 up, 8 in 2026-03-09T18:31:27.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:27 vm09 ceph-mon[54744]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:31:28.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:28 vm04 ceph-mon[57581]: osdmap e121: 8 total, 8 up, 8 in 2026-03-09T18:31:28.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:28 vm04 ceph-mon[57581]: pgmap v134: 164 pgs: 164 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:28.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:28 vm04 ceph-mon[51427]: osdmap e121: 8 total, 8 up, 8 in 2026-03-09T18:31:28.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:28 vm04 ceph-mon[51427]: pgmap v134: 164 pgs: 164 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:28.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:28 
vm09 ceph-mon[54744]: osdmap e121: 8 total, 8 up, 8 in 2026-03-09T18:31:28.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:28 vm09 ceph-mon[54744]: pgmap v134: 164 pgs: 164 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:29.172 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:31:28 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ::ffff:192.168.123.109 - - [09/Mar/2026:18:31:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:31:29.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:29 vm04 ceph-mon[57581]: osdmap e122: 8 total, 8 up, 8 in 2026-03-09T18:31:29.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:29 vm04 ceph-mon[51427]: osdmap e122: 8 total, 8 up, 8 in 2026-03-09T18:31:29.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:29 vm09 ceph-mon[54744]: osdmap e122: 8 total, 8 up, 8 in 2026-03-09T18:31:30.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:30 vm09 ceph-mon[54744]: osdmap e123: 8 total, 8 up, 8 in 2026-03-09T18:31:30.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:30 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3598184436' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:30.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:30 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:30.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:30 vm09 ceph-mon[54744]: pgmap v137: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:30.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:30 vm04 ceph-mon[57581]: osdmap e123: 8 total, 8 up, 8 in 2026-03-09T18:31:30.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:30 vm04 ceph-mon[57581]: from='client.? 
192.168.123.104:0/3598184436' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:30.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:30 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:30.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:30 vm04 ceph-mon[57581]: pgmap v137: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:30.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:30 vm04 ceph-mon[51427]: osdmap e123: 8 total, 8 up, 8 in 2026-03-09T18:31:30.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:30 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3598184436' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:30.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:30 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:30.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:30 vm04 ceph-mon[51427]: pgmap v137: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:31.216 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_read_crc PASSED [ 34%] 2026-03-09T18:31:31.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:31 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:31.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:31 vm09 ceph-mon[54744]: osdmap e124: 8 total, 8 up, 8 in 2026-03-09T18:31:31.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:31 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:31:31.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:31 vm09 ceph-mon[54744]: osdmap e125: 8 total, 8 up, 8 in 2026-03-09T18:31:31.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:31 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:31.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:31 vm04 ceph-mon[57581]: osdmap e124: 8 total, 8 up, 8 in 2026-03-09T18:31:31.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:31 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:31:31.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:31 vm04 ceph-mon[57581]: osdmap e125: 8 total, 8 up, 8 in 2026-03-09T18:31:31.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:31 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:31.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:31 vm04 ceph-mon[51427]: osdmap e124: 8 total, 8 up, 8 in 2026-03-09T18:31:31.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:31 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:31:31.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:31 vm04 ceph-mon[51427]: osdmap e125: 8 total, 8 up, 8 in 2026-03-09T18:31:32.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:32 vm09 ceph-mon[54744]: pgmap v140: 164 pgs: 164 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:31:32.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:32 vm04 ceph-mon[57581]: pgmap v140: 164 pgs: 164 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:31:32.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:32 vm04 ceph-mon[51427]: pgmap v140: 164 pgs: 164 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:31:33.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:33 vm09 ceph-mon[54744]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:31:33.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:33 vm09 ceph-mon[54744]: osdmap e126: 8 total, 8 up, 8 in 2026-03-09T18:31:33.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:33 vm04 ceph-mon[57581]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:31:33.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:33 vm04 ceph-mon[57581]: osdmap e126: 8 total, 8 up, 8 in 2026-03-09T18:31:33.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:33 vm04 ceph-mon[51427]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 
2026-03-09T18:31:33.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:33 vm04 ceph-mon[51427]: osdmap e126: 8 total, 8 up, 8 in 2026-03-09T18:31:34.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:34 vm09 ceph-mon[54744]: osdmap e127: 8 total, 8 up, 8 in 2026-03-09T18:31:34.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:34 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3271254208' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:34.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:34 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:34.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:34 vm09 ceph-mon[54744]: pgmap v143: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:34.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:34 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:34.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:34 vm09 ceph-mon[54744]: osdmap e128: 8 total, 8 up, 8 in 2026-03-09T18:31:34.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:34 vm04 ceph-mon[57581]: osdmap e127: 8 total, 8 up, 8 in 2026-03-09T18:31:34.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:34 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3271254208' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:34.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:34 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:34.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:34 vm04 ceph-mon[57581]: pgmap v143: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:34.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:34 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:34.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:34 vm04 ceph-mon[57581]: osdmap e128: 8 total, 8 up, 8 in 2026-03-09T18:31:34.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:34 vm04 ceph-mon[51427]: osdmap e127: 8 total, 8 up, 8 in 2026-03-09T18:31:34.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:34 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3271254208' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:34.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:34 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:34.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:34 vm04 ceph-mon[51427]: pgmap v143: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:34.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:34 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:34.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:34 vm04 ceph-mon[51427]: osdmap e128: 8 total, 8 up, 8 in 2026-03-09T18:31:35.305 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_list_objects PASSED [ 35%] 2026-03-09T18:31:36.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:36 vm04 ceph-mon[57581]: osdmap e129: 8 total, 8 up, 8 in 2026-03-09T18:31:36.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:36 vm04 ceph-mon[57581]: pgmap v146: 164 pgs: 164 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:36.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:36 vm04 ceph-mon[51427]: osdmap e129: 8 total, 8 up, 8 in 2026-03-09T18:31:36.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:36 vm04 ceph-mon[51427]: pgmap v146: 164 pgs: 164 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:36.858 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:31:36 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug there is no tcmu-runner data available 2026-03-09T18:31:36.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:36 vm09 ceph-mon[54744]: osdmap e129: 8 total, 8 up, 8 in 2026-03-09T18:31:36.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:36 vm09 ceph-mon[54744]: pgmap v146: 164 pgs: 164 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:37.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:37 vm04 ceph-mon[57581]: osdmap e130: 8 total, 8 up, 8 in 2026-03-09T18:31:37.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:37 vm04 ceph-mon[57581]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:31:37.717 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:37 vm04 ceph-mon[51427]: osdmap e130: 8 total, 8 up, 8 in 2026-03-09T18:31:37.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:37 vm04 ceph-mon[51427]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:31:37.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:37 vm09 ceph-mon[54744]: osdmap e130: 8 total, 8 up, 8 in 2026-03-09T18:31:37.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:37 vm09 ceph-mon[54744]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:31:38.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:38 vm04 ceph-mon[57581]: osdmap e131: 8 total, 8 up, 8 in 2026-03-09T18:31:38.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:38 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3149133936' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:38.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:38 vm04 ceph-mon[57581]: pgmap v149: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:38.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:38 vm04 ceph-mon[51427]: osdmap e131: 8 total, 8 up, 8 in 2026-03-09T18:31:38.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:38 vm04 ceph-mon[51427]: from='client.? 
192.168.123.104:0/3149133936' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:38.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:38 vm04 ceph-mon[51427]: pgmap v149: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:38.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:38 vm09 ceph-mon[54744]: osdmap e131: 8 total, 8 up, 8 in 2026-03-09T18:31:38.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:38 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3149133936' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:38.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:38 vm09 ceph-mon[54744]: pgmap v149: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:39.217 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:31:38 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ::ffff:192.168.123.109 - - [09/Mar/2026:18:31:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:31:39.388 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_list_ns_objects PASSED [ 36%] 2026-03-09T18:31:39.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:39 vm04 ceph-mon[57581]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:31:39.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:39 vm04 ceph-mon[57581]: from='client.? 
192.168.123.104:0/3149133936' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:39.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:39 vm04 ceph-mon[57581]: osdmap e132: 8 total, 8 up, 8 in 2026-03-09T18:31:39.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:39 vm04 ceph-mon[51427]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:31:39.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:39 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3149133936' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:39.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:39 vm04 ceph-mon[51427]: osdmap e132: 8 total, 8 up, 8 in 2026-03-09T18:31:39.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:39 vm09 ceph-mon[54744]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:31:39.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:39 vm09 ceph-mon[54744]: from='client.? 
192.168.123.104:0/3149133936' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:39.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:39 vm09 ceph-mon[54744]: osdmap e132: 8 total, 8 up, 8 in 2026-03-09T18:31:40.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:40 vm04 ceph-mon[57581]: osdmap e133: 8 total, 8 up, 8 in 2026-03-09T18:31:40.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:40 vm04 ceph-mon[57581]: pgmap v152: 164 pgs: 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:40.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:40 vm04 ceph-mon[51427]: osdmap e133: 8 total, 8 up, 8 in 2026-03-09T18:31:40.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:40 vm04 ceph-mon[51427]: pgmap v152: 164 pgs: 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:40.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:40 vm09 ceph-mon[54744]: osdmap e133: 8 total, 8 up, 8 in 2026-03-09T18:31:40.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:40 vm09 ceph-mon[54744]: pgmap v152: 164 pgs: 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:41.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:41 vm04 ceph-mon[57581]: osdmap e134: 8 total, 8 up, 8 in 2026-03-09T18:31:41.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:41 vm04 ceph-mon[51427]: osdmap e134: 8 total, 8 up, 8 in 2026-03-09T18:31:41.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:41 vm09 ceph-mon[54744]: osdmap e134: 8 total, 8 up, 8 in 2026-03-09T18:31:42.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:42 vm04 ceph-mon[57581]: osdmap e135: 8 total, 8 up, 8 in 2026-03-09T18:31:42.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:42 vm04 ceph-mon[57581]: pgmap v155: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 327 MiB 
used, 160 GiB / 160 GiB avail 2026-03-09T18:31:42.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:42 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3515916731' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:42.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:42 vm04 ceph-mon[51427]: osdmap e135: 8 total, 8 up, 8 in 2026-03-09T18:31:42.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:42 vm04 ceph-mon[51427]: pgmap v155: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:31:42.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:42 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3515916731' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:42.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:42 vm09 ceph-mon[54744]: osdmap e135: 8 total, 8 up, 8 in 2026-03-09T18:31:42.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:42 vm09 ceph-mon[54744]: pgmap v155: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:31:42.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:42 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3515916731' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:43.430 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_xattrs PASSED [ 37%] 2026-03-09T18:31:43.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:43 vm04 ceph-mon[57581]: from='client.? 
192.168.123.104:0/3515916731' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:43.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:43 vm04 ceph-mon[57581]: osdmap e136: 8 total, 8 up, 8 in 2026-03-09T18:31:43.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:43 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:31:43.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:43 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:31:43.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:43 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:31:43.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:43 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:31:43.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:43 vm04 ceph-mon[51427]: from='client.? 
192.168.123.104:0/3515916731' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:43.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:43 vm04 ceph-mon[51427]: osdmap e136: 8 total, 8 up, 8 in 2026-03-09T18:31:43.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:43 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:31:43.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:43 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:31:43.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:43 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:31:43.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:43 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:31:43.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:43 vm09 ceph-mon[54744]: from='client.? 
192.168.123.104:0/3515916731' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:43.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:43 vm09 ceph-mon[54744]: osdmap e136: 8 total, 8 up, 8 in 2026-03-09T18:31:43.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:43 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:31:43.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:43 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:31:43.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:43 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:31:43.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:43 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:31:44.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:44 vm04 ceph-mon[57581]: osdmap e137: 8 total, 8 up, 8 in 2026-03-09T18:31:44.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:44 vm04 ceph-mon[57581]: pgmap v158: 164 pgs: 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:44.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:44 vm04 ceph-mon[51427]: osdmap e137: 8 total, 8 up, 8 in 2026-03-09T18:31:44.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:44 vm04 ceph-mon[51427]: pgmap v158: 164 pgs: 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:44.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:44 vm09 ceph-mon[54744]: osdmap e137: 8 total, 8 up, 8 in 2026-03-09T18:31:44.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:44 vm09 ceph-mon[54744]: pgmap v158: 164 pgs: 
164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:45.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:45 vm09 ceph-mon[54744]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:31:45.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:45 vm09 ceph-mon[54744]: osdmap e138: 8 total, 8 up, 8 in 2026-03-09T18:31:45.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:45 vm04 ceph-mon[57581]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:31:45.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:45 vm04 ceph-mon[57581]: osdmap e138: 8 total, 8 up, 8 in 2026-03-09T18:31:45.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:45 vm04 ceph-mon[51427]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:31:45.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:45 vm04 ceph-mon[51427]: osdmap e138: 8 total, 8 up, 8 in 2026-03-09T18:31:46.858 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:31:46 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug there is no tcmu-runner data available 2026-03-09T18:31:46.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:46 vm09 ceph-mon[54744]: pgmap v160: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:46.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:46 vm09 ceph-mon[54744]: osdmap e139: 8 total, 8 up, 8 in 2026-03-09T18:31:46.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:46 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/4209501216' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:46.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:46 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:46.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:46 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:31:46.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:46 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:46.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:46 vm09 ceph-mon[54744]: osdmap e140: 8 total, 8 up, 8 in 2026-03-09T18:31:46.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:46 vm04 ceph-mon[57581]: pgmap v160: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:46.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:46 vm04 ceph-mon[57581]: osdmap e139: 8 total, 8 up, 8 in 2026-03-09T18:31:46.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:46 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/4209501216' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:46.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:46 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:46.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:46 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:31:46.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:46 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:46.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:46 vm04 ceph-mon[57581]: osdmap e140: 8 total, 8 up, 8 in 2026-03-09T18:31:46.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:46 vm04 ceph-mon[51427]: pgmap v160: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:46.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:46 vm04 ceph-mon[51427]: osdmap e139: 8 total, 8 up, 8 in 2026-03-09T18:31:46.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:46 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/4209501216' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:46.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:46 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:46.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:46 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:31:46.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:46 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:46.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:46 vm04 ceph-mon[51427]: osdmap e140: 8 total, 8 up, 8 in 2026-03-09T18:31:47.466 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_obj_xattrs PASSED [ 38%] 2026-03-09T18:31:47.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:47 vm09 ceph-mon[54744]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:31:47.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:47 vm09 ceph-mon[54744]: osdmap e141: 8 total, 8 up, 8 in 2026-03-09T18:31:47.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:47 vm04 ceph-mon[57581]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:31:47.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:47 vm04 ceph-mon[57581]: osdmap e141: 8 total, 8 up, 8 in 2026-03-09T18:31:47.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:47 vm04 ceph-mon[51427]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:31:47.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:47 vm04 ceph-mon[51427]: osdmap e141: 8 total, 8 up, 8 in 2026-03-09T18:31:48.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:48 vm09 ceph-mon[54744]: pgmap v163: 196 pgs: 196 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-09T18:31:48.904 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:48 vm04 ceph-mon[57581]: pgmap v163: 196 pgs: 196 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-09T18:31:48.904 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:48 vm04 ceph-mon[51427]: pgmap v163: 196 pgs: 196 
active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-09T18:31:49.217 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:31:48 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ::ffff:192.168.123.109 - - [09/Mar/2026:18:31:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:31:49.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:49 vm09 ceph-mon[54744]: osdmap e142: 8 total, 8 up, 8 in 2026-03-09T18:31:49.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:49 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/2900806756' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:49.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:49 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/2900806756' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:49.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:49 vm09 ceph-mon[54744]: osdmap e143: 8 total, 8 up, 8 in 2026-03-09T18:31:49.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:49 vm04 ceph-mon[57581]: osdmap e142: 8 total, 8 up, 8 in 2026-03-09T18:31:49.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:49 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/2900806756' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:49.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:49 vm04 ceph-mon[57581]: from='client.? 
192.168.123.104:0/2900806756' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:49.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:49 vm04 ceph-mon[57581]: osdmap e143: 8 total, 8 up, 8 in 2026-03-09T18:31:49.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:49 vm04 ceph-mon[51427]: osdmap e142: 8 total, 8 up, 8 in 2026-03-09T18:31:49.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:49 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/2900806756' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:49.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:49 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/2900806756' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:49.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:49 vm04 ceph-mon[51427]: osdmap e143: 8 total, 8 up, 8 in 2026-03-09T18:31:50.596 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_get_pool_id PASSED [ 39%] 2026-03-09T18:31:50.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:50 vm04 ceph-mon[57581]: pgmap v166: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:50.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:50 vm04 ceph-mon[57581]: osdmap e144: 8 total, 8 up, 8 in 2026-03-09T18:31:50.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:50 vm04 ceph-mon[51427]: pgmap v166: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:50.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:50 vm04 ceph-mon[51427]: osdmap e144: 8 total, 8 up, 8 in 2026-03-09T18:31:51.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:50 vm09 ceph-mon[54744]: pgmap v166: 196 pgs: 32 unknown, 164 active+clean; 455 
KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:51.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:50 vm09 ceph-mon[54744]: osdmap e144: 8 total, 8 up, 8 in 2026-03-09T18:31:52.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:52 vm04 ceph-mon[57581]: pgmap v169: 164 pgs: 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:31:52.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:52 vm04 ceph-mon[57581]: osdmap e145: 8 total, 8 up, 8 in 2026-03-09T18:31:52.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:52 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3597104263' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:52.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:52 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:52.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:52 vm04 ceph-mon[57581]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:31:52.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:52 vm04 ceph-mon[51427]: pgmap v169: 164 pgs: 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:31:52.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:52 vm04 ceph-mon[51427]: osdmap e145: 8 total, 8 up, 8 in 2026-03-09T18:31:52.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:52 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3597104263' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:52.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:52 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:52.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:52 vm04 ceph-mon[51427]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:31:53.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:52 vm09 ceph-mon[54744]: pgmap v169: 164 pgs: 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:31:53.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:52 vm09 ceph-mon[54744]: osdmap e145: 8 total, 8 up, 8 in 2026-03-09T18:31:53.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:52 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3597104263' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:53.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:52 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:53.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:52 vm09 ceph-mon[54744]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:31:53.607 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_get_pool_name PASSED [ 40%] 2026-03-09T18:31:53.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:53 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:53.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:53 vm04 ceph-mon[57581]: osdmap e146: 8 total, 8 up, 8 in 2026-03-09T18:31:53.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:53 vm04 ceph-mon[57581]: osdmap e147: 8 total, 8 up, 8 in 2026-03-09T18:31:53.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:53 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:53.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:53 vm04 ceph-mon[51427]: osdmap e146: 8 total, 8 up, 8 in 2026-03-09T18:31:53.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:53 vm04 ceph-mon[51427]: osdmap e147: 8 total, 8 up, 8 in 2026-03-09T18:31:54.009 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:53 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:54.009 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:53 vm09 ceph-mon[54744]: osdmap e146: 8 total, 8 up, 8 in 2026-03-09T18:31:54.009 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:53 vm09 ceph-mon[54744]: osdmap e147: 8 total, 8 up, 8 in 2026-03-09T18:31:54.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:54 vm04 ceph-mon[57581]: pgmap v172: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:54.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:54 vm04 ceph-mon[51427]: pgmap v172: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:55.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:54 vm09 ceph-mon[54744]: pgmap v172: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:55.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:55 vm04 ceph-mon[57581]: osdmap e148: 8 total, 8 up, 8 in 2026-03-09T18:31:55.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:55 vm04 ceph-mon[51427]: osdmap e148: 8 total, 8 up, 8 in 2026-03-09T18:31:56.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:55 vm09 ceph-mon[54744]: osdmap e148: 8 total, 8 up, 8 in 2026-03-09T18:31:56.858 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:31:56 vm09 
ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug there is no tcmu-runner data available 2026-03-09T18:31:56.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:56 vm09 ceph-mon[54744]: pgmap v175: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:56.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:56 vm09 ceph-mon[54744]: osdmap e149: 8 total, 8 up, 8 in 2026-03-09T18:31:56.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:56 vm04 ceph-mon[57581]: pgmap v175: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:56.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:56 vm04 ceph-mon[57581]: osdmap e149: 8 total, 8 up, 8 in 2026-03-09T18:31:56.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:56 vm04 ceph-mon[51427]: pgmap v175: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:56.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:56 vm04 ceph-mon[51427]: osdmap e149: 8 total, 8 up, 8 in 2026-03-09T18:31:57.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:57 vm04 ceph-mon[57581]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:31:57.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:57 vm04 ceph-mon[57581]: osdmap e150: 8 total, 8 up, 8 in 2026-03-09T18:31:57.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:57 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/650568544' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:57.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:57 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:57.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:57 vm04 ceph-mon[51427]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:31:57.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:57 vm04 ceph-mon[51427]: osdmap e150: 8 total, 8 up, 8 in 2026-03-09T18:31:57.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:57 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/650568544' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:57.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:57 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:58.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:57 vm09 ceph-mon[54744]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:31:58.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:57 vm09 ceph-mon[54744]: osdmap e150: 8 total, 8 up, 8 in 2026-03-09T18:31:58.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:57 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/650568544' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:58.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:57 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:31:58.711 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_create_snap PASSED [ 41%] 2026-03-09T18:31:58.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:58 vm04 ceph-mon[57581]: pgmap v178: 196 pgs: 196 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:58.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:58 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:58.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:58 vm04 ceph-mon[57581]: osdmap e151: 8 total, 8 up, 8 in 2026-03-09T18:31:58.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:58 vm04 ceph-mon[57581]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:31:58.967 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:31:58 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ::ffff:192.168.123.109 - - [09/Mar/2026:18:31:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:31:58.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:58 vm04 ceph-mon[51427]: pgmap v178: 196 pgs: 196 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:58.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:58 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:58.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:58 vm04 ceph-mon[51427]: osdmap e151: 8 total, 8 up, 8 in 2026-03-09T18:31:58.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:58 vm04 ceph-mon[51427]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:31:59.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:58 vm09 ceph-mon[54744]: pgmap v178: 196 pgs: 196 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:59.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:58 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:31:59.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:58 vm09 ceph-mon[54744]: osdmap e151: 8 total, 8 up, 8 in 2026-03-09T18:31:59.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:58 vm09 ceph-mon[54744]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:32:00.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:31:59 vm09 ceph-mon[54744]: osdmap e152: 8 total, 8 up, 8 in 2026-03-09T18:32:00.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:31:59 vm04 ceph-mon[57581]: osdmap e152: 8 total, 8 up, 8 in 2026-03-09T18:32:00.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:31:59 vm04 ceph-mon[51427]: osdmap e152: 8 total, 8 up, 8 in 2026-03-09T18:32:01.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:00 vm09 ceph-mon[54744]: pgmap v181: 164 pgs: 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:01.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:00 vm09 ceph-mon[54744]: osdmap e153: 8 total, 8 up, 8 in 2026-03-09T18:32:01.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:00 vm09 ceph-mon[54744]: 
from='client.? 192.168.123.104:0/3717584815' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:01.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:00 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:01.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:00 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:32:01.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:00 vm04 ceph-mon[57581]: pgmap v181: 164 pgs: 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:01.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:00 vm04 ceph-mon[57581]: osdmap e153: 8 total, 8 up, 8 in 2026-03-09T18:32:01.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:00 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3717584815' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:01.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:00 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:01.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:00 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:32:01.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:00 vm04 ceph-mon[51427]: pgmap v181: 164 pgs: 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:01.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:00 vm04 ceph-mon[51427]: osdmap e153: 8 total, 8 up, 8 in 2026-03-09T18:32:01.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:00 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3717584815' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:01.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:00 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:01.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:00 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:32:01.823 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_list_snaps_empty PASSED [ 42%] 2026-03-09T18:32:02.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:01 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:32:02.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:01 vm09 ceph-mon[54744]: osdmap e154: 8 total, 8 up, 8 in 2026-03-09T18:32:02.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:01 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:32:02.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:01 vm04 ceph-mon[57581]: osdmap e154: 8 total, 8 up, 8 in 2026-03-09T18:32:02.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:01 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:32:02.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:01 vm04 ceph-mon[51427]: osdmap e154: 8 total, 8 up, 8 in 2026-03-09T18:32:03.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:02 vm04 ceph-mon[57581]: pgmap v184: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:32:03.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:02 vm04 ceph-mon[57581]: osdmap e155: 8 total, 8 up, 8 in 2026-03-09T18:32:03.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:02 vm04 ceph-mon[51427]: pgmap v184: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:32:03.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:02 vm04 ceph-mon[51427]: osdmap e155: 8 total, 8 up, 8 in 2026-03-09T18:32:03.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:02 vm09 ceph-mon[54744]: pgmap v184: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:32:03.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:02 vm09 ceph-mon[54744]: osdmap e155: 8 total, 8 up, 8 in 2026-03-09T18:32:04.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:03 vm04 ceph-mon[57581]: osdmap e156: 8 total, 8 up, 8 in 2026-03-09T18:32:04.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:03 vm04 ceph-mon[57581]: osdmap e157: 8 total, 8 up, 8 in 2026-03-09T18:32:04.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:03 vm04 ceph-mon[51427]: osdmap e156: 8 total, 8 up, 8 in 2026-03-09T18:32:04.217 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:03 vm04 ceph-mon[51427]: osdmap e157: 8 total, 8 up, 8 in 2026-03-09T18:32:04.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:03 vm09 ceph-mon[54744]: osdmap e156: 8 total, 8 up, 8 in 2026-03-09T18:32:04.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:03 vm09 ceph-mon[54744]: osdmap e157: 8 total, 8 up, 8 in 2026-03-09T18:32:05.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:04 vm04 ceph-mon[57581]: pgmap v187: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:05.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:04 vm04 ceph-mon[57581]: osdmap e158: 8 total, 8 up, 8 in 2026-03-09T18:32:05.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:04 vm04 ceph-mon[51427]: pgmap v187: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:05.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:04 vm04 ceph-mon[51427]: osdmap e158: 8 total, 8 up, 8 in 2026-03-09T18:32:05.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:04 vm09 ceph-mon[54744]: pgmap v187: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:05.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:04 vm09 ceph-mon[54744]: osdmap e158: 8 total, 8 up, 8 in 2026-03-09T18:32:06.858 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:32:06 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug there is no tcmu-runner data available 2026-03-09T18:32:07.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:06 vm04 ceph-mon[51427]: pgmap v190: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:07.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:06 vm04 ceph-mon[51427]: osdmap e159: 
8 total, 8 up, 8 in 2026-03-09T18:32:07.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:06 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3009537289' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:07.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:06 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:07.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:06 vm04 ceph-mon[57581]: pgmap v190: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:07.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:06 vm04 ceph-mon[57581]: osdmap e159: 8 total, 8 up, 8 in 2026-03-09T18:32:07.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:06 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3009537289' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:07.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:06 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:07.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:06 vm09 ceph-mon[54744]: pgmap v190: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:07.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:06 vm09 ceph-mon[54744]: osdmap e159: 8 total, 8 up, 8 in 2026-03-09T18:32:07.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:06 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3009537289' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:07.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:06 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:07.973 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_list_snaps PASSED [ 43%] 2026-03-09T18:32:08.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:07 vm04 ceph-mon[57581]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:32:08.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:07 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:32:08.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:07 vm04 ceph-mon[57581]: osdmap e160: 8 total, 8 up, 8 in 2026-03-09T18:32:08.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:07 vm04 ceph-mon[51427]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:32:08.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:07 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:32:08.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:07 vm04 ceph-mon[51427]: osdmap e160: 8 total, 8 up, 8 in 2026-03-09T18:32:08.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:07 vm09 ceph-mon[54744]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:32:08.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:07 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:32:08.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:07 vm09 ceph-mon[54744]: osdmap e160: 8 total, 8 up, 8 in 2026-03-09T18:32:09.217 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:32:08 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ::ffff:192.168.123.109 - - [09/Mar/2026:18:32:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:32:09.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:08 vm04 ceph-mon[57581]: pgmap v193: 196 pgs: 196 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:09.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:08 vm04 ceph-mon[57581]: osdmap e161: 8 total, 8 up, 8 in 2026-03-09T18:32:09.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:08 vm04 ceph-mon[51427]: pgmap v193: 196 pgs: 196 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:09.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:08 vm04 ceph-mon[51427]: osdmap e161: 8 total, 8 up, 8 in 2026-03-09T18:32:09.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:08 vm09 ceph-mon[54744]: pgmap v193: 196 pgs: 196 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:09.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:08 vm09 ceph-mon[54744]: osdmap e161: 8 total, 8 up, 8 in 2026-03-09T18:32:10.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:09 vm09 ceph-mon[54744]: osdmap e162: 8 total, 8 up, 8 in 2026-03-09T18:32:10.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:09 vm04 ceph-mon[51427]: osdmap e162: 8 total, 8 up, 8 in 2026-03-09T18:32:10.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:09 vm04 ceph-mon[57581]: osdmap e162: 8 total, 8 up, 8 in 2026-03-09T18:32:11.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:10 vm09 
ceph-mon[54744]: pgmap v196: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:11.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:10 vm09 ceph-mon[54744]: osdmap e163: 8 total, 8 up, 8 in 2026-03-09T18:32:11.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:10 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3734686874' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:11.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:10 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:11.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:10 vm04 ceph-mon[51427]: pgmap v196: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:11.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:10 vm04 ceph-mon[51427]: osdmap e163: 8 total, 8 up, 8 in 2026-03-09T18:32:11.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:10 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3734686874' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:11.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:10 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:11.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:10 vm04 ceph-mon[57581]: pgmap v196: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:11.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:10 vm04 ceph-mon[57581]: osdmap e163: 8 total, 8 up, 8 in 2026-03-09T18:32:11.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:10 vm04 ceph-mon[57581]: from='client.? 
192.168.123.104:0/3734686874' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:11.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:10 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:12.175 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_lookup_snap PASSED [ 45%] 2026-03-09T18:32:12.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:12 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:32:12.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:12 vm04 ceph-mon[57581]: osdmap e164: 8 total, 8 up, 8 in 2026-03-09T18:32:12.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:12 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:32:12.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:12 vm04 ceph-mon[51427]: osdmap e164: 8 total, 8 up, 8 in 2026-03-09T18:32:12.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:12 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:32:12.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:12 vm09 ceph-mon[54744]: osdmap e164: 8 total, 8 up, 8 in 2026-03-09T18:32:13.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:13 vm04 ceph-mon[51427]: pgmap v199: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:32:13.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:13 vm04 ceph-mon[51427]: osdmap e165: 8 total, 8 up, 8 in 2026-03-09T18:32:13.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:13 vm04 ceph-mon[57581]: pgmap v199: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:32:13.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:13 vm04 ceph-mon[57581]: osdmap e165: 8 total, 8 up, 8 in 2026-03-09T18:32:13.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:13 vm09 ceph-mon[54744]: pgmap v199: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:32:13.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:13 vm09 ceph-mon[54744]: osdmap e165: 8 total, 8 up, 8 in 2026-03-09T18:32:14.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:14 vm09 ceph-mon[54744]: osdmap e166: 8 total, 8 up, 8 in 2026-03-09T18:32:14.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:14 vm09 ceph-mon[54744]: pgmap v202: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:14.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:14 vm09 ceph-mon[54744]: osdmap e167: 8 total, 8 up, 8 in 2026-03-09T18:32:14.716 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:14 vm04 ceph-mon[51427]: osdmap e166: 8 total, 8 up, 8 in 2026-03-09T18:32:14.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:14 vm04 ceph-mon[51427]: pgmap v202: 196 pgs: 32 unknown, 164 
active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:14.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:14 vm04 ceph-mon[51427]: osdmap e167: 8 total, 8 up, 8 in 2026-03-09T18:32:14.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:14 vm04 ceph-mon[57581]: osdmap e166: 8 total, 8 up, 8 in 2026-03-09T18:32:14.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:14 vm04 ceph-mon[57581]: pgmap v202: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:14.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:14 vm04 ceph-mon[57581]: osdmap e167: 8 total, 8 up, 8 in 2026-03-09T18:32:15.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:15 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/997209217' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:15.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:15 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:15.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:15 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:32:15.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:15 vm09 ceph-mon[54744]: osdmap e168: 8 total, 8 up, 8 in 2026-03-09T18:32:15.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:15 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/997209217' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:15.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:15 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:15.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:15 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:32:15.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:15 vm04 ceph-mon[57581]: osdmap e168: 8 total, 8 up, 8 in 2026-03-09T18:32:15.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:15 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/997209217' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:15.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:15 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:15.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:15 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:32:15.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:15 vm04 ceph-mon[51427]: osdmap e168: 8 total, 8 up, 8 in 2026-03-09T18:32:16.204 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_snap_timestamp PASSED [ 46%] 2026-03-09T18:32:16.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:16 vm09 ceph-mon[54744]: pgmap v205: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:16.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:16 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:32:16.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:16 vm09 ceph-mon[54744]: osdmap e169: 8 total, 8 up, 8 in 2026-03-09T18:32:16.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:16 vm04 ceph-mon[57581]: pgmap v205: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:16.717 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:16 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:32:16.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:16 vm04 ceph-mon[57581]: osdmap e169: 8 total, 8 up, 8 in 2026-03-09T18:32:16.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:16 vm04 ceph-mon[51427]: pgmap v205: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:16.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:16 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:32:16.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:16 vm04 ceph-mon[51427]: osdmap e169: 8 total, 8 up, 8 in 2026-03-09T18:32:17.108 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:32:16 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug there is no tcmu-runner data available 2026-03-09T18:32:17.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:17 vm09 ceph-mon[54744]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:32:17.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:17 vm09 ceph-mon[54744]: osdmap e170: 8 total, 8 up, 8 in 2026-03-09T18:32:17.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:17 vm04 ceph-mon[57581]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:32:17.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:17 vm04 ceph-mon[57581]: osdmap e170: 8 total, 8 up, 8 in 2026-03-09T18:32:17.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:17 vm04 ceph-mon[51427]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": 
"service status", "format": "json"}]: dispatch 2026-03-09T18:32:17.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:17 vm04 ceph-mon[51427]: osdmap e170: 8 total, 8 up, 8 in 2026-03-09T18:32:18.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:18 vm09 ceph-mon[54744]: pgmap v208: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:18.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:18 vm09 ceph-mon[54744]: osdmap e171: 8 total, 8 up, 8 in 2026-03-09T18:32:18.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:18 vm04 ceph-mon[57581]: pgmap v208: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:18.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:18 vm04 ceph-mon[57581]: osdmap e171: 8 total, 8 up, 8 in 2026-03-09T18:32:18.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:18 vm04 ceph-mon[51427]: pgmap v208: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:18.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:18 vm04 ceph-mon[51427]: osdmap e171: 8 total, 8 up, 8 in 2026-03-09T18:32:19.217 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:32:18 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ::ffff:192.168.123.109 - - [09/Mar/2026:18:32:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:32:20.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:20 vm09 ceph-mon[54744]: osdmap e172: 8 total, 8 up, 8 in 2026-03-09T18:32:20.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:20 vm09 ceph-mon[54744]: from='client.? 
192.168.123.104:0/622064055' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:20.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:20 vm09 ceph-mon[54744]: pgmap v211: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:20.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:20 vm04 ceph-mon[57581]: osdmap e172: 8 total, 8 up, 8 in 2026-03-09T18:32:20.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:20 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/622064055' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:20.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:20 vm04 ceph-mon[57581]: pgmap v211: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:20.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:20 vm04 ceph-mon[51427]: osdmap e172: 8 total, 8 up, 8 in 2026-03-09T18:32:20.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:20 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/622064055' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:20.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:20 vm04 ceph-mon[51427]: pgmap v211: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:21.336 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_remove_snap PASSED [ 47%] 2026-03-09T18:32:21.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:21 vm09 ceph-mon[54744]: from='client.? 
192.168.123.104:0/622064055' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:32:21.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:21 vm09 ceph-mon[54744]: osdmap e173: 8 total, 8 up, 8 in 2026-03-09T18:32:21.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:21 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/622064055' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:32:21.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:21 vm04 ceph-mon[57581]: osdmap e173: 8 total, 8 up, 8 in 2026-03-09T18:32:21.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:21 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/622064055' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:32:21.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:21 vm04 ceph-mon[51427]: osdmap e173: 8 total, 8 up, 8 in 2026-03-09T18:32:22.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:22 vm09 ceph-mon[54744]: osdmap e174: 8 total, 8 up, 8 in 2026-03-09T18:32:22.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:22 vm09 ceph-mon[54744]: pgmap v214: 164 pgs: 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:32:22.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:22 vm04 ceph-mon[57581]: osdmap e174: 8 total, 8 up, 8 in 2026-03-09T18:32:22.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:22 vm04 ceph-mon[57581]: pgmap v214: 164 pgs: 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:32:22.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:22 vm04 ceph-mon[51427]: osdmap e174: 8 total, 8 up, 8 in 2026-03-09T18:32:22.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:22 vm04 ceph-mon[51427]: pgmap v214: 164 pgs: 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:32:23.717 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:23 vm04 ceph-mon[57581]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:32:23.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:23 vm04 ceph-mon[57581]: osdmap e175: 8 total, 8 up, 8 in 2026-03-09T18:32:23.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:23 vm04 ceph-mon[51427]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:32:23.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:23 vm04 ceph-mon[51427]: osdmap e175: 8 total, 8 up, 8 in 2026-03-09T18:32:23.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:23 vm09 ceph-mon[54744]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:32:23.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:23 vm09 ceph-mon[54744]: osdmap e175: 8 total, 8 up, 8 in 2026-03-09T18:32:24.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:24 vm04 ceph-mon[57581]: osdmap e176: 8 total, 8 up, 8 in 2026-03-09T18:32:24.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:24 vm04 ceph-mon[57581]: pgmap v217: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:24.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:24 vm04 ceph-mon[57581]: osdmap e177: 8 total, 8 up, 8 in 2026-03-09T18:32:24.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:24 vm04 ceph-mon[51427]: osdmap e176: 8 total, 8 up, 8 in 2026-03-09T18:32:24.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:24 vm04 ceph-mon[51427]: pgmap v217: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:24.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:24 vm04 ceph-mon[51427]: osdmap e177: 8 total, 8 up, 8 in 2026-03-09T18:32:24.759 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:24 vm09 ceph-mon[54744]: osdmap e176: 8 total, 8 up, 8 in 2026-03-09T18:32:24.759 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:24 vm09 ceph-mon[54744]: pgmap v217: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:24.759 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:24 vm09 ceph-mon[54744]: osdmap e177: 8 total, 8 up, 8 in 2026-03-09T18:32:26.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:26 vm04 ceph-mon[57581]: osdmap e178: 8 total, 8 up, 8 in 2026-03-09T18:32:26.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:26 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/2606624156' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:26.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:26 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:26.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:26 vm04 ceph-mon[57581]: pgmap v220: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:26.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:26 vm04 ceph-mon[51427]: osdmap e178: 8 total, 8 up, 8 in 2026-03-09T18:32:26.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:26 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/2606624156' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:26.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:26 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:26.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:26 vm04 ceph-mon[51427]: pgmap v220: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:26.858 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:32:26 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug there is no tcmu-runner data available 2026-03-09T18:32:26.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:26 vm09 ceph-mon[54744]: osdmap e178: 8 total, 8 up, 8 in 2026-03-09T18:32:26.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:26 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/2606624156' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:26.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:26 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:26.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:26 vm09 ceph-mon[54744]: pgmap v220: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:27.404 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_snap_rollback PASSED [ 48%] 2026-03-09T18:32:27.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:27 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:32:27.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:27 vm04 ceph-mon[57581]: osdmap e179: 8 total, 8 up, 8 in 2026-03-09T18:32:27.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:27 vm04 ceph-mon[57581]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:32:27.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:27 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:32:27.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:27 vm04 ceph-mon[51427]: osdmap e179: 8 total, 8 up, 8 in 2026-03-09T18:32:27.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:27 vm04 ceph-mon[51427]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:32:27.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:27 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:32:27.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:27 vm09 ceph-mon[54744]: osdmap e179: 8 total, 8 up, 8 in 2026-03-09T18:32:27.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:27 vm09 ceph-mon[54744]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:32:28.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:28 vm09 ceph-mon[54744]: osdmap e180: 8 total, 8 up, 8 in 2026-03-09T18:32:28.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:28 vm09 ceph-mon[54744]: pgmap v223: 164 pgs: 164 active+clean; 455 KiB data, 332 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:28.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:28 vm09 ceph-mon[54744]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:32:28.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:28 vm09 ceph-mon[54744]: osdmap e181: 8 total, 8 up, 8 in 2026-03-09T18:32:28.904 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:28 vm04 ceph-mon[57581]: osdmap e180: 8 total, 8 up, 8 in 2026-03-09T18:32:28.904 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:28 vm04 ceph-mon[57581]: pgmap v223: 164 pgs: 164 active+clean; 455 KiB data, 332 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:28.904 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:28 vm04 ceph-mon[57581]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:32:28.904 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:28 vm04 ceph-mon[57581]: osdmap e181: 8 total, 8 up, 8 in 2026-03-09T18:32:28.904 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:28 vm04 ceph-mon[51427]: osdmap e180: 8 total, 8 up, 8 in 2026-03-09T18:32:28.904 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:28 vm04 
ceph-mon[51427]: pgmap v223: 164 pgs: 164 active+clean; 455 KiB data, 332 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:28.904 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:28 vm04 ceph-mon[51427]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:32:28.904 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:28 vm04 ceph-mon[51427]: osdmap e181: 8 total, 8 up, 8 in 2026-03-09T18:32:29.216 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:32:28 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ::ffff:192.168.123.109 - - [09/Mar/2026:18:32:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:32:30.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:30 vm04 ceph-mon[51427]: osdmap e182: 8 total, 8 up, 8 in 2026-03-09T18:32:30.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:30 vm04 ceph-mon[51427]: pgmap v226: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 332 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:30.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:30 vm04 ceph-mon[57581]: osdmap e182: 8 total, 8 up, 8 in 2026-03-09T18:32:30.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:30 vm04 ceph-mon[57581]: pgmap v226: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 332 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:30.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:30 vm09 ceph-mon[54744]: osdmap e182: 8 total, 8 up, 8 in 2026-03-09T18:32:30.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:30 vm09 ceph-mon[54744]: pgmap v226: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 332 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:31.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:31 vm09 ceph-mon[54744]: osdmap e183: 8 total, 8 up, 8 in 2026-03-09T18:32:31.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:31 vm09 
ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:32:31.966 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:31 vm04 ceph-mon[51427]: osdmap e183: 8 total, 8 up, 8 in 2026-03-09T18:32:31.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:31 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:32:31.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:31 vm04 ceph-mon[57581]: osdmap e183: 8 total, 8 up, 8 in 2026-03-09T18:32:31.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:31 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:32:32.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:32 vm09 ceph-mon[54744]: pgmap v228: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 332 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:32:32.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:32 vm09 ceph-mon[54744]: osdmap e184: 8 total, 8 up, 8 in 2026-03-09T18:32:32.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:32 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/419782878' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:32.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:32 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:32.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:32 vm04 ceph-mon[51427]: pgmap v228: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 332 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:32:32.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:32 vm04 ceph-mon[51427]: osdmap e184: 8 total, 8 up, 8 in 2026-03-09T18:32:32.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:32 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/419782878' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:32.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:32 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:32.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:32 vm04 ceph-mon[57581]: pgmap v228: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 332 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:32:32.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:32 vm04 ceph-mon[57581]: osdmap e184: 8 total, 8 up, 8 in 2026-03-09T18:32:32.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:32 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/419782878' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:32.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:32 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:33.517 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_snap_rollback_removed PASSED [ 49%] 2026-03-09T18:32:33.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:33 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:32:33.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:33 vm09 ceph-mon[54744]: osdmap e185: 8 total, 8 up, 8 in 2026-03-09T18:32:33.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:33 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:32:33.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:33 vm04 ceph-mon[57581]: osdmap e185: 8 total, 8 up, 8 in 2026-03-09T18:32:33.966 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:33 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:32:33.966 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:33 vm04 ceph-mon[51427]: osdmap e185: 8 total, 8 up, 8 in 2026-03-09T18:32:34.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:34 vm09 ceph-mon[54744]: pgmap v231: 196 pgs: 196 active+clean; 455 KiB data, 333 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 510 B/s wr, 1 op/s 2026-03-09T18:32:34.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:34 vm09 ceph-mon[54744]: osdmap e186: 8 total, 8 up, 8 in 2026-03-09T18:32:34.966 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:34 vm04 ceph-mon[51427]: pgmap v231: 196 pgs: 196 active+clean; 455 KiB data, 333 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 510 B/s wr, 1 op/s 2026-03-09T18:32:34.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:34 vm04 ceph-mon[51427]: osdmap e186: 8 total, 8 up, 8 in 2026-03-09T18:32:34.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:34 vm04 ceph-mon[57581]: pgmap v231: 196 pgs: 196 active+clean; 455 KiB data, 333 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 510 B/s wr, 1 op/s 2026-03-09T18:32:34.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:34 vm04 ceph-mon[57581]: osdmap e186: 8 total, 8 up, 8 in 2026-03-09T18:32:35.716 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:35 vm04 ceph-mon[57581]: osdmap e187: 8 total, 8 up, 8 in 2026-03-09T18:32:35.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:35 vm04 ceph-mon[51427]: osdmap e187: 8 total, 8 up, 8 in 2026-03-09T18:32:35.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:35 vm09 ceph-mon[54744]: osdmap e187: 8 total, 8 up, 8 in 2026-03-09T18:32:36.858 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:32:36 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug there is no tcmu-runner data available 2026-03-09T18:32:36.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:36 vm09 ceph-mon[54744]: pgmap v234: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 333 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:36.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:36 vm09 ceph-mon[54744]: osdmap e188: 8 total, 8 up, 8 in 2026-03-09T18:32:36.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:36 vm04 ceph-mon[57581]: pgmap v234: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 333 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:36.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:36 vm04 ceph-mon[57581]: osdmap e188: 8 total, 8 up, 8 in 2026-03-09T18:32:36.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:36 vm04 ceph-mon[51427]: pgmap v234: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 333 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:36.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:36 vm04 ceph-mon[51427]: osdmap e188: 8 total, 8 up, 8 in 2026-03-09T18:32:37.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:37 vm09 ceph-mon[54744]: osdmap e189: 8 total, 8 up, 8 in 2026-03-09T18:32:37.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:37 vm09 ceph-mon[54744]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: 
dispatch 2026-03-09T18:32:37.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:37 vm04 ceph-mon[57581]: osdmap e189: 8 total, 8 up, 8 in 2026-03-09T18:32:37.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:37 vm04 ceph-mon[57581]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:32:37.966 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:37 vm04 ceph-mon[51427]: osdmap e189: 8 total, 8 up, 8 in 2026-03-09T18:32:37.966 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:37 vm04 ceph-mon[51427]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:32:38.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:38 vm09 ceph-mon[54744]: pgmap v237: 196 pgs: 196 active+clean; 455 KiB data, 386 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:32:38.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:38 vm09 ceph-mon[54744]: osdmap e190: 8 total, 8 up, 8 in 2026-03-09T18:32:38.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:38 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/391668458' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:38.904 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:38 vm04 ceph-mon[57581]: pgmap v237: 196 pgs: 196 active+clean; 455 KiB data, 386 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:32:38.904 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:38 vm04 ceph-mon[57581]: osdmap e190: 8 total, 8 up, 8 in 2026-03-09T18:32:38.904 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:38 vm04 ceph-mon[57581]: from='client.? 
192.168.123.104:0/391668458' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:38.904 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:38 vm04 ceph-mon[51427]: pgmap v237: 196 pgs: 196 active+clean; 455 KiB data, 386 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:32:38.904 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:38 vm04 ceph-mon[51427]: osdmap e190: 8 total, 8 up, 8 in 2026-03-09T18:32:38.904 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:38 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/391668458' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:39.216 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:32:38 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ::ffff:192.168.123.109 - - [09/Mar/2026:18:32:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:32:39.572 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_snap_read PASSED [ 50%] 2026-03-09T18:32:39.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:39 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/391668458' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:32:39.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:39 vm09 ceph-mon[54744]: osdmap e191: 8 total, 8 up, 8 in 2026-03-09T18:32:39.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:39 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/391668458' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:32:39.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:39 vm04 ceph-mon[57581]: osdmap e191: 8 total, 8 up, 8 in 2026-03-09T18:32:39.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:39 vm04 ceph-mon[51427]: from='client.? 
192.168.123.104:0/391668458' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:32:39.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:39 vm04 ceph-mon[51427]: osdmap e191: 8 total, 8 up, 8 in 2026-03-09T18:32:40.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:40 vm04 ceph-mon[57581]: pgmap v240: 196 pgs: 196 active+clean; 455 KiB data, 386 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T18:32:40.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:40 vm04 ceph-mon[57581]: osdmap e192: 8 total, 8 up, 8 in 2026-03-09T18:32:40.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:40 vm04 ceph-mon[51427]: pgmap v240: 196 pgs: 196 active+clean; 455 KiB data, 386 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T18:32:40.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:40 vm04 ceph-mon[51427]: osdmap e192: 8 total, 8 up, 8 in 2026-03-09T18:32:41.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:40 vm09 ceph-mon[54744]: pgmap v240: 196 pgs: 196 active+clean; 455 KiB data, 386 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T18:32:41.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:40 vm09 ceph-mon[54744]: osdmap e192: 8 total, 8 up, 8 in 2026-03-09T18:32:41.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:41 vm04 ceph-mon[57581]: osdmap e193: 8 total, 8 up, 8 in 2026-03-09T18:32:41.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:41 vm04 ceph-mon[57581]: osdmap e194: 8 total, 8 up, 8 in 2026-03-09T18:32:41.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:41 vm04 ceph-mon[51427]: osdmap e193: 8 total, 8 up, 8 in 2026-03-09T18:32:41.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:41 vm04 ceph-mon[51427]: osdmap e194: 8 total, 8 up, 8 in 2026-03-09T18:32:42.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:41 vm09 ceph-mon[54744]: osdmap e193: 8 total, 8 up, 8 in 
2026-03-09T18:32:42.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:41 vm09 ceph-mon[54744]: osdmap e194: 8 total, 8 up, 8 in 2026-03-09T18:32:42.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:42 vm04 ceph-mon[57581]: pgmap v243: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 386 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:32:42.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:42 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/2206729306' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:42.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:42 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/2206729306' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:32:42.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:42 vm04 ceph-mon[57581]: osdmap e195: 8 total, 8 up, 8 in 2026-03-09T18:32:42.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:42 vm04 ceph-mon[51427]: pgmap v243: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 386 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:32:42.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:42 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/2206729306' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:42.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:42 vm04 ceph-mon[51427]: from='client.? 
192.168.123.104:0/2206729306' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:32:42.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:42 vm04 ceph-mon[51427]: osdmap e195: 8 total, 8 up, 8 in 2026-03-09T18:32:43.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:42 vm09 ceph-mon[54744]: pgmap v243: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 386 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:32:43.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:42 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/2206729306' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:43.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:42 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/2206729306' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:32:43.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:42 vm09 ceph-mon[54744]: osdmap e195: 8 total, 8 up, 8 in 2026-03-09T18:32:43.593 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_set_omap PASSED [ 51%] 2026-03-09T18:32:43.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:43 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:32:43.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:43 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:32:43.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:43 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:32:43.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:43 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 
2026-03-09T18:32:43.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:43 vm04 ceph-mon[57581]: osdmap e196: 8 total, 8 up, 8 in 2026-03-09T18:32:43.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:43 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:32:43.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:43 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:32:43.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:43 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:32:43.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:43 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:32:43.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:43 vm04 ceph-mon[51427]: osdmap e196: 8 total, 8 up, 8 in 2026-03-09T18:32:44.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:43 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:32:44.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:43 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:32:44.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:43 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:32:44.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:43 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:32:44.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:43 vm09 ceph-mon[54744]: osdmap e196: 8 total, 
8 up, 8 in 2026-03-09T18:32:44.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:44 vm04 ceph-mon[57581]: pgmap v246: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 404 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:44.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:44 vm04 ceph-mon[57581]: osdmap e197: 8 total, 8 up, 8 in 2026-03-09T18:32:44.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:44 vm04 ceph-mon[51427]: pgmap v246: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 404 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:44.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:44 vm04 ceph-mon[51427]: osdmap e197: 8 total, 8 up, 8 in 2026-03-09T18:32:45.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:44 vm09 ceph-mon[54744]: pgmap v246: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 404 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:45.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:44 vm09 ceph-mon[54744]: osdmap e197: 8 total, 8 up, 8 in 2026-03-09T18:32:46.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:46 vm04 ceph-mon[57581]: pgmap v249: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 404 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:46.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:46 vm04 ceph-mon[57581]: osdmap e198: 8 total, 8 up, 8 in 2026-03-09T18:32:46.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:46 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:32:46.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:46 vm04 ceph-mon[57581]: from='client.? 
192.168.123.104:0/3769686789' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:46.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:46 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:46.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:46 vm04 ceph-mon[51427]: pgmap v249: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 404 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:46.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:46 vm04 ceph-mon[51427]: osdmap e198: 8 total, 8 up, 8 in 2026-03-09T18:32:46.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:46 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:32:46.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:46 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3769686789' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:46.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:46 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:47.108 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:32:46 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug there is no tcmu-runner data available 2026-03-09T18:32:47.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:46 vm09 ceph-mon[54744]: pgmap v249: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 404 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:47.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:46 vm09 ceph-mon[54744]: osdmap e198: 8 total, 8 up, 8 in 2026-03-09T18:32:47.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:46 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:32:47.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:46 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3769686789' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:47.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:46 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:47.650 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_set_omap_aio PASSED [ 52%] 2026-03-09T18:32:47.659 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:47 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:32:47.659 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:47 vm04 ceph-mon[51427]: osdmap e199: 8 total, 8 up, 8 in 2026-03-09T18:32:47.659 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:47 vm04 ceph-mon[51427]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:32:47.660 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:47 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:32:47.660 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:47 vm04 ceph-mon[57581]: osdmap e199: 8 total, 8 up, 8 in 2026-03-09T18:32:47.660 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:47 vm04 ceph-mon[57581]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:32:48.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:47 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:32:48.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:47 vm09 ceph-mon[54744]: osdmap e199: 8 total, 8 up, 8 in 2026-03-09T18:32:48.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:47 vm09 ceph-mon[54744]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:32:48.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:48 vm04 ceph-mon[57581]: pgmap v252: 196 pgs: 196 active+clean; 455 KiB data, 405 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T18:32:48.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:48 vm04 ceph-mon[57581]: osdmap e200: 8 total, 8 up, 8 in 2026-03-09T18:32:48.967 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:32:48 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ::ffff:192.168.123.109 - - [09/Mar/2026:18:32:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:32:48.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:48 vm04 ceph-mon[51427]: pgmap v252: 196 pgs: 196 active+clean; 455 KiB data, 405 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T18:32:48.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:48 vm04 ceph-mon[51427]: osdmap e200: 8 total, 8 up, 8 in 2026-03-09T18:32:49.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:48 vm09 ceph-mon[54744]: pgmap v252: 196 pgs: 196 active+clean; 455 KiB data, 405 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T18:32:49.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:48 vm09 ceph-mon[54744]: osdmap e200: 8 total, 8 up, 8 in 2026-03-09T18:32:49.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:49 vm04 ceph-mon[57581]: osdmap e201: 8 total, 8 up, 8 in 2026-03-09T18:32:49.966 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:49 vm04 ceph-mon[51427]: osdmap 
e201: 8 total, 8 up, 8 in 2026-03-09T18:32:50.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:49 vm09 ceph-mon[54744]: osdmap e201: 8 total, 8 up, 8 in 2026-03-09T18:32:50.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:50 vm04 ceph-mon[57581]: pgmap v255: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 405 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:50.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:50 vm04 ceph-mon[57581]: osdmap e202: 8 total, 8 up, 8 in 2026-03-09T18:32:50.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:50 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3484594074' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:50.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:50 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:50.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:50 vm04 ceph-mon[51427]: pgmap v255: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 405 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:50.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:50 vm04 ceph-mon[51427]: osdmap e202: 8 total, 8 up, 8 in 2026-03-09T18:32:50.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:50 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3484594074' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:50.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:50 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:51.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:50 vm09 ceph-mon[54744]: pgmap v255: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 405 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:51.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:50 vm09 ceph-mon[54744]: osdmap e202: 8 total, 8 up, 8 in 2026-03-09T18:32:51.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:50 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3484594074' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:51.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:50 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:51.703 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_write_ops PASSED [ 53%] 2026-03-09T18:32:51.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:51 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:32:51.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:51 vm04 ceph-mon[57581]: osdmap e203: 8 total, 8 up, 8 in 2026-03-09T18:32:51.966 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:51 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:32:51.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:51 vm04 ceph-mon[51427]: osdmap e203: 8 total, 8 up, 8 in 2026-03-09T18:32:52.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:51 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:32:52.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:51 vm09 ceph-mon[54744]: osdmap e203: 8 total, 8 up, 8 in 2026-03-09T18:32:53.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:52 vm09 ceph-mon[54744]: pgmap v258: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 405 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:32:53.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:52 vm09 ceph-mon[54744]: osdmap e204: 8 total, 8 up, 8 in 2026-03-09T18:32:53.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:52 vm04 ceph-mon[57581]: pgmap v258: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 405 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:32:53.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:52 vm04 ceph-mon[57581]: osdmap e204: 8 total, 8 up, 8 in 2026-03-09T18:32:53.216 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:52 vm04 ceph-mon[51427]: pgmap v258: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 405 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:32:53.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:52 vm04 ceph-mon[51427]: osdmap e204: 8 total, 8 up, 8 in 2026-03-09T18:32:54.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:53 vm09 ceph-mon[54744]: osdmap e205: 8 total, 8 up, 8 in 2026-03-09T18:32:54.118 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:53 vm04 ceph-mon[57581]: osdmap e205: 8 total, 8 up, 8 in 2026-03-09T18:32:54.118 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:53 vm04 ceph-mon[51427]: osdmap e205: 8 total, 8 up, 8 in 2026-03-09T18:32:55.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:54 vm09 ceph-mon[54744]: pgmap v261: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 406 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:55.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:54 vm09 ceph-mon[54744]: osdmap e206: 8 total, 8 up, 8 in 
2026-03-09T18:32:55.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:54 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/4172401862' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:55.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:54 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:55.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:54 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:32:55.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:54 vm09 ceph-mon[54744]: osdmap e207: 8 total, 8 up, 8 in 2026-03-09T18:32:55.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:54 vm04 ceph-mon[57581]: pgmap v261: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 406 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:55.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:54 vm04 ceph-mon[57581]: osdmap e206: 8 total, 8 up, 8 in 2026-03-09T18:32:55.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:54 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/4172401862' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:55.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:54 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:55.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:54 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:32:55.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:54 vm04 ceph-mon[57581]: osdmap e207: 8 total, 8 up, 8 in 2026-03-09T18:32:55.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:54 vm04 ceph-mon[51427]: pgmap v261: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 406 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:55.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:54 vm04 ceph-mon[51427]: osdmap e206: 8 total, 8 up, 8 in 2026-03-09T18:32:55.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:54 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/4172401862' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:55.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:54 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:55.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:54 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:32:55.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:54 vm04 ceph-mon[51427]: osdmap e207: 8 total, 8 up, 8 in 2026-03-09T18:32:55.761 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_execute_op PASSED [ 54%] 2026-03-09T18:32:57.108 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:32:56 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug there is no tcmu-runner data available 2026-03-09T18:32:57.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:56 vm09 ceph-mon[54744]: pgmap v264: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 406 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:57.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:56 vm09 ceph-mon[54744]: osdmap e208: 8 total, 8 up, 8 in 2026-03-09T18:32:57.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:56 vm04 ceph-mon[57581]: pgmap v264: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 406 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:57.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:56 vm04 ceph-mon[57581]: osdmap e208: 8 total, 8 up, 8 in 2026-03-09T18:32:57.216 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:56 vm04 ceph-mon[51427]: pgmap v264: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 406 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:57.216 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:56 vm04 ceph-mon[51427]: osdmap e208: 8 total, 8 up, 8 in 2026-03-09T18:32:58.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:57 vm09 ceph-mon[54744]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:32:58.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:57 vm09 ceph-mon[54744]: osdmap e209: 8 total, 8 
up, 8 in 2026-03-09T18:32:58.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:57 vm04 ceph-mon[57581]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:32:58.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:57 vm04 ceph-mon[57581]: osdmap e209: 8 total, 8 up, 8 in 2026-03-09T18:32:58.216 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:57 vm04 ceph-mon[51427]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:32:58.216 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:57 vm04 ceph-mon[51427]: osdmap e209: 8 total, 8 up, 8 in 2026-03-09T18:32:59.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:58 vm09 ceph-mon[54744]: pgmap v267: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 406 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:59.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:58 vm09 ceph-mon[54744]: osdmap e210: 8 total, 8 up, 8 in 2026-03-09T18:32:59.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:58 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/4264162980' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:59.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:58 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:59.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:58 vm04 ceph-mon[57581]: pgmap v267: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 406 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:59.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:58 vm04 ceph-mon[57581]: osdmap e210: 8 total, 8 up, 8 in 2026-03-09T18:32:59.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:58 vm04 ceph-mon[57581]: from='client.? 
192.168.123.104:0/4264162980' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:59.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:58 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:59.216 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:32:58 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ::ffff:192.168.123.109 - - [09/Mar/2026:18:32:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:32:59.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:58 vm04 ceph-mon[51427]: pgmap v267: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 406 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:59.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:58 vm04 ceph-mon[51427]: osdmap e210: 8 total, 8 up, 8 in 2026-03-09T18:32:59.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:58 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/4264162980' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:59.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:58 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:32:59.834 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_writesame_op PASSED [ 56%] 2026-03-09T18:33:00.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:59 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:00.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:59 vm09 ceph-mon[54744]: osdmap e211: 8 total, 8 up, 8 in 2026-03-09T18:33:00.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:32:59 vm09 ceph-mon[54744]: osdmap e212: 8 total, 8 up, 8 in 2026-03-09T18:33:00.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:59 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:00.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:59 vm04 ceph-mon[57581]: osdmap e211: 8 total, 8 up, 8 in 2026-03-09T18:33:00.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:32:59 vm04 ceph-mon[57581]: osdmap e212: 8 total, 8 up, 8 in 2026-03-09T18:33:00.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:59 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:00.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:59 vm04 ceph-mon[51427]: osdmap e211: 8 total, 8 up, 8 in 2026-03-09T18:33:00.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:32:59 vm04 ceph-mon[51427]: osdmap e212: 8 total, 8 up, 8 in 2026-03-09T18:33:01.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:00 vm04 ceph-mon[57581]: pgmap v270: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 406 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:01.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:00 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:33:01.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:00 vm04 ceph-mon[57581]: osdmap e213: 8 total, 8 up, 8 in 2026-03-09T18:33:01.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:00 vm04 ceph-mon[51427]: pgmap v270: 196 pgs: 32 unknown, 
164 active+clean; 455 KiB data, 406 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:01.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:00 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:33:01.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:00 vm04 ceph-mon[51427]: osdmap e213: 8 total, 8 up, 8 in 2026-03-09T18:33:01.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:00 vm09 ceph-mon[54744]: pgmap v270: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 406 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:01.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:00 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:33:01.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:00 vm09 ceph-mon[54744]: osdmap e213: 8 total, 8 up, 8 in 2026-03-09T18:33:03.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:02 vm09 ceph-mon[54744]: pgmap v273: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 406 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:33:03.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:02 vm09 ceph-mon[54744]: osdmap e214: 8 total, 8 up, 8 in 2026-03-09T18:33:03.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:02 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/1748668966' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:03.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:02 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:03.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:02 vm04 ceph-mon[57581]: pgmap v273: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 406 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:33:03.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:02 vm04 ceph-mon[57581]: osdmap e214: 8 total, 8 up, 8 in 2026-03-09T18:33:03.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:02 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/1748668966' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:03.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:02 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:03.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:02 vm04 ceph-mon[51427]: pgmap v273: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 406 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:33:03.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:02 vm04 ceph-mon[51427]: osdmap e214: 8 total, 8 up, 8 in 2026-03-09T18:33:03.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:02 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/1748668966' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:03.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:02 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:03.864 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_get_omap_vals_by_keys PASSED [ 57%] 2026-03-09T18:33:04.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:03 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:04.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:03 vm04 ceph-mon[57581]: osdmap e215: 8 total, 8 up, 8 in 2026-03-09T18:33:04.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:03 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:04.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:03 vm04 ceph-mon[51427]: osdmap e215: 8 total, 8 up, 8 in 2026-03-09T18:33:04.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:03 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:04.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:03 vm09 ceph-mon[54744]: osdmap e215: 8 total, 8 up, 8 in 2026-03-09T18:33:05.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:04 vm04 ceph-mon[57581]: pgmap v276: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 411 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:05.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:04 vm04 ceph-mon[57581]: osdmap e216: 8 total, 8 up, 8 in 2026-03-09T18:33:05.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:04 vm04 ceph-mon[51427]: pgmap v276: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 411 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:05.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:04 vm04 ceph-mon[51427]: osdmap e216: 8 total, 8 up, 8 in 2026-03-09T18:33:05.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:04 vm09 ceph-mon[54744]: pgmap v276: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 411 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:05.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:04 vm09 ceph-mon[54744]: osdmap e216: 8 total, 8 up, 8 in 
2026-03-09T18:33:06.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:05 vm04 ceph-mon[57581]: osdmap e217: 8 total, 8 up, 8 in 2026-03-09T18:33:06.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:05 vm04 ceph-mon[51427]: osdmap e217: 8 total, 8 up, 8 in 2026-03-09T18:33:06.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:05 vm09 ceph-mon[54744]: osdmap e217: 8 total, 8 up, 8 in 2026-03-09T18:33:07.108 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:33:06 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug there is no tcmu-runner data available 2026-03-09T18:33:07.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:06 vm09 ceph-mon[54744]: pgmap v279: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 411 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:07.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:06 vm09 ceph-mon[54744]: osdmap e218: 8 total, 8 up, 8 in 2026-03-09T18:33:07.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:06 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3753700572' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:07.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:06 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:07.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:06 vm04 ceph-mon[57581]: pgmap v279: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 411 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:07.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:06 vm04 ceph-mon[57581]: osdmap e218: 8 total, 8 up, 8 in 2026-03-09T18:33:07.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:06 vm04 ceph-mon[57581]: from='client.? 
192.168.123.104:0/3753700572' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:07.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:06 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:07.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:06 vm04 ceph-mon[51427]: pgmap v279: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 411 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:07.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:06 vm04 ceph-mon[51427]: osdmap e218: 8 total, 8 up, 8 in 2026-03-09T18:33:07.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:06 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3753700572' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:07.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:06 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:07.905 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_get_omap_keys PASSED [ 58%] 2026-03-09T18:33:08.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:07 vm04 ceph-mon[57581]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:33:08.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:07 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:08.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:07 vm04 ceph-mon[57581]: osdmap e219: 8 total, 8 up, 8 in 2026-03-09T18:33:08.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:07 vm04 ceph-mon[51427]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:33:08.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:07 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:08.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:07 vm04 ceph-mon[51427]: osdmap e219: 8 total, 8 up, 8 in 2026-03-09T18:33:08.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:07 vm09 ceph-mon[54744]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:33:08.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:07 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:08.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:07 vm09 ceph-mon[54744]: osdmap e219: 8 total, 8 up, 8 in 2026-03-09T18:33:09.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:08 vm04 ceph-mon[57581]: pgmap v282: 196 pgs: 196 active+clean; 455 KiB data, 424 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T18:33:09.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:08 vm04 ceph-mon[57581]: osdmap e220: 8 total, 8 up, 8 in 2026-03-09T18:33:09.217 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:33:08 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ::ffff:192.168.123.109 - - [09/Mar/2026:18:33:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:33:09.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:08 vm04 ceph-mon[51427]: pgmap v282: 196 pgs: 196 active+clean; 455 KiB data, 424 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T18:33:09.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:08 vm04 ceph-mon[51427]: osdmap e220: 8 total, 8 up, 8 in 2026-03-09T18:33:09.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:08 vm09 ceph-mon[54744]: pgmap v282: 196 pgs: 196 active+clean; 455 KiB data, 424 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T18:33:09.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:08 vm09 ceph-mon[54744]: osdmap e220: 8 total, 8 up, 8 in 2026-03-09T18:33:10.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:09 vm04 ceph-mon[57581]: osdmap e221: 8 total, 8 up, 8 in 2026-03-09T18:33:10.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:09 vm04 ceph-mon[51427]: osdmap e221: 8 total, 8 up, 8 in 2026-03-09T18:33:10.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:09 vm09 ceph-mon[54744]: osdmap e221: 8 total, 8 up, 8 in 2026-03-09T18:33:11.216 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:10 vm04 ceph-mon[57581]: pgmap v285: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 424 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:11.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:10 vm04 ceph-mon[57581]: osdmap e222: 8 total, 8 up, 8 in 2026-03-09T18:33:11.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:10 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/1130626417' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:11.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:10 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:11.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:10 vm04 ceph-mon[51427]: pgmap v285: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 424 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:11.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:10 vm04 ceph-mon[51427]: osdmap e222: 8 total, 8 up, 8 in 2026-03-09T18:33:11.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:10 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/1130626417' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:11.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:10 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:11.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:10 vm09 ceph-mon[54744]: pgmap v285: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 424 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:11.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:10 vm09 ceph-mon[54744]: osdmap e222: 8 total, 8 up, 8 in 2026-03-09T18:33:11.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:10 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/1130626417' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:11.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:10 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:11.947 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_clear_omap PASSED [ 59%] 2026-03-09T18:33:12.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:11 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:12.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:11 vm04 ceph-mon[57581]: osdmap e223: 8 total, 8 up, 8 in 2026-03-09T18:33:12.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:11 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:12.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:11 vm04 ceph-mon[51427]: osdmap e223: 8 total, 8 up, 8 in 2026-03-09T18:33:12.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:11 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:12.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:11 vm09 ceph-mon[54744]: osdmap e223: 8 total, 8 up, 8 in 2026-03-09T18:33:13.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:12 vm09 ceph-mon[54744]: pgmap v288: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 424 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:33:13.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:12 vm09 ceph-mon[54744]: osdmap e224: 8 total, 8 up, 8 in 2026-03-09T18:33:13.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:12 vm04 ceph-mon[57581]: pgmap v288: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 424 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:33:13.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:12 vm04 ceph-mon[57581]: osdmap e224: 8 total, 8 up, 8 in 2026-03-09T18:33:13.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:12 vm04 ceph-mon[51427]: pgmap v288: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 424 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:33:13.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:12 vm04 ceph-mon[51427]: osdmap e224: 8 total, 8 up, 8 in 2026-03-09T18:33:14.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:13 vm09 ceph-mon[54744]: osdmap e225: 8 total, 8 up, 8 in 2026-03-09T18:33:14.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:13 vm04 ceph-mon[57581]: osdmap e225: 8 total, 8 up, 8 in 2026-03-09T18:33:14.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:13 vm04 ceph-mon[51427]: osdmap e225: 8 total, 8 up, 8 in 2026-03-09T18:33:15.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:14 vm09 ceph-mon[54744]: pgmap v291: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 428 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:15.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:14 vm09 ceph-mon[54744]: osdmap e226: 8 total, 8 up, 8 in 
2026-03-09T18:33:15.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:14 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/1930186500' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:15.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:14 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:15.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:14 vm04 ceph-mon[57581]: pgmap v291: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 428 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:15.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:14 vm04 ceph-mon[57581]: osdmap e226: 8 total, 8 up, 8 in 2026-03-09T18:33:15.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:14 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/1930186500' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:15.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:14 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:15.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:14 vm04 ceph-mon[51427]: pgmap v291: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 428 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:15.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:14 vm04 ceph-mon[51427]: osdmap e226: 8 total, 8 up, 8 in 2026-03-09T18:33:15.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:14 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/1930186500' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:15.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:14 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:16.023 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_remove_omap_range2 PASSED [ 60%] 2026-03-09T18:33:16.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:16 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:16.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:16 vm09 ceph-mon[54744]: osdmap e227: 8 total, 8 up, 8 in 2026-03-09T18:33:16.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:16 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:33:16.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:16 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:16.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:16 vm04 ceph-mon[57581]: osdmap e227: 8 total, 8 up, 8 in 2026-03-09T18:33:16.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:16 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:33:16.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:16 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:16.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:16 vm04 ceph-mon[51427]: osdmap e227: 8 total, 8 up, 8 in 2026-03-09T18:33:16.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:16 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:33:17.009 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:33:16 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug there is no tcmu-runner data available 2026-03-09T18:33:17.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:17 vm09 ceph-mon[54744]: pgmap v294: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 428 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:17.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:17 vm09 ceph-mon[54744]: osdmap e228: 8 total, 8 up, 8 in 2026-03-09T18:33:17.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:17 vm04 ceph-mon[57581]: pgmap v294: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 428 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:17.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:17 vm04 ceph-mon[57581]: osdmap e228: 8 total, 8 up, 8 in 2026-03-09T18:33:17.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:17 vm04 ceph-mon[51427]: pgmap v294: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 428 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:17.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:17 vm04 ceph-mon[51427]: osdmap e228: 8 total, 8 up, 8 in 2026-03-09T18:33:18.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:18 vm09 ceph-mon[54744]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:33:18.358 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:18 vm09 ceph-mon[54744]: osdmap e229: 8 total, 8 up, 8 in 2026-03-09T18:33:18.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:18 vm04 ceph-mon[57581]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:33:18.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:18 vm04 ceph-mon[57581]: osdmap e229: 8 total, 8 up, 8 in 2026-03-09T18:33:18.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:18 vm04 ceph-mon[51427]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:33:18.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:18 vm04 ceph-mon[51427]: osdmap e229: 8 total, 8 up, 8 in 2026-03-09T18:33:19.217 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:33:18 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ::ffff:192.168.123.109 - - [09/Mar/2026:18:33:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:33:19.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:19 vm04 ceph-mon[57581]: pgmap v297: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 429 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:19.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:19 vm04 ceph-mon[57581]: osdmap e230: 8 total, 8 up, 8 in 2026-03-09T18:33:19.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:19 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/4227858998' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:19.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:19 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:19.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:19 vm04 ceph-mon[51427]: pgmap v297: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 429 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:19.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:19 vm04 ceph-mon[51427]: osdmap e230: 8 total, 8 up, 8 in 2026-03-09T18:33:19.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:19 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/4227858998' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:19.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:19 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:19.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:19 vm09 ceph-mon[54744]: pgmap v297: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 429 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:19.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:19 vm09 ceph-mon[54744]: osdmap e230: 8 total, 8 up, 8 in 2026-03-09T18:33:19.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:19 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/4227858998' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:19.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:19 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:20.111 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_omap_cmp PASSED [ 61%] 2026-03-09T18:33:20.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:20 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:20.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:20 vm09 ceph-mon[54744]: osdmap e231: 8 total, 8 up, 8 in 2026-03-09T18:33:20.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:20 vm09 ceph-mon[54744]: pgmap v300: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 429 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:20.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:20 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:20.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:20 vm04 ceph-mon[57581]: osdmap e231: 8 total, 8 up, 8 in 2026-03-09T18:33:20.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:20 vm04 ceph-mon[57581]: pgmap v300: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 429 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:20.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:20 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:20.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:20 vm04 ceph-mon[51427]: osdmap e231: 8 total, 8 up, 8 in 2026-03-09T18:33:20.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:20 vm04 ceph-mon[51427]: pgmap v300: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 429 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:21.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:21 vm09 ceph-mon[54744]: osdmap e232: 8 total, 8 up, 8 in 2026-03-09T18:33:21.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:21 vm04 ceph-mon[57581]: osdmap e232: 8 total, 8 up, 8 in 2026-03-09T18:33:21.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:21 vm04 ceph-mon[51427]: osdmap e232: 8 total, 8 up, 8 in 2026-03-09T18:33:22.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:22 vm04 ceph-mon[57581]: osdmap e233: 8 total, 8 up, 8 in 2026-03-09T18:33:22.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:22 vm04 ceph-mon[57581]: pgmap v303: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 429 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:33:22.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:22 vm04 ceph-mon[51427]: osdmap e233: 8 total, 8 up, 8 in 2026-03-09T18:33:22.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:22 vm04 ceph-mon[51427]: pgmap v303: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 429 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:33:22.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:22 vm09 ceph-mon[54744]: osdmap e233: 8 total, 8 up, 8 in 2026-03-09T18:33:22.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:22 vm09 ceph-mon[54744]: pgmap v303: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 429 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:33:23.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:23 vm04 ceph-mon[57581]: osdmap e234: 8 total, 8 up, 8 in 
2026-03-09T18:33:23.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:23 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/1105995767' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:23.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:23 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:23.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:23 vm04 ceph-mon[51427]: osdmap e234: 8 total, 8 up, 8 in 2026-03-09T18:33:23.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:23 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/1105995767' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:23.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:23 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:23.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:23 vm09 ceph-mon[54744]: osdmap e234: 8 total, 8 up, 8 in 2026-03-09T18:33:23.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:23 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/1105995767' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:23.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:23 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:24.198 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_cmpext_op PASSED [ 62%] 2026-03-09T18:33:24.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:24 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:24.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:24 vm04 ceph-mon[57581]: osdmap e235: 8 total, 8 up, 8 in 2026-03-09T18:33:24.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:24 vm04 ceph-mon[57581]: pgmap v306: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 434 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:24.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:24 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:24.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:24 vm04 ceph-mon[51427]: osdmap e235: 8 total, 8 up, 8 in 2026-03-09T18:33:24.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:24 vm04 ceph-mon[51427]: pgmap v306: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 434 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:24.509 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:24 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:24.509 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:24 vm09 ceph-mon[54744]: osdmap e235: 8 total, 8 up, 8 in 2026-03-09T18:33:24.509 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:24 vm09 ceph-mon[54744]: pgmap v306: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 434 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:25.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:25 vm04 ceph-mon[57581]: osdmap e236: 8 total, 8 up, 8 in 2026-03-09T18:33:25.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:25 vm04 ceph-mon[51427]: osdmap e236: 8 total, 8 up, 8 in 2026-03-09T18:33:25.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:25 vm09 ceph-mon[54744]: osdmap e236: 8 total, 8 up, 8 in 2026-03-09T18:33:26.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:26 vm04 ceph-mon[57581]: osdmap e237: 8 total, 8 up, 8 in 2026-03-09T18:33:26.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:26 vm04 ceph-mon[57581]: pgmap v309: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 434 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:26.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:26 vm04 ceph-mon[51427]: osdmap e237: 8 total, 8 up, 8 in 2026-03-09T18:33:26.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:26 vm04 ceph-mon[51427]: pgmap v309: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 434 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:26.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:26 vm09 ceph-mon[54744]: osdmap e237: 8 total, 8 up, 8 in 2026-03-09T18:33:26.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:26 vm09 ceph-mon[54744]: pgmap v309: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 434 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:27.108 
INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:33:26 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug there is no tcmu-runner data available 2026-03-09T18:33:27.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:27 vm04 ceph-mon[57581]: osdmap e238: 8 total, 8 up, 8 in 2026-03-09T18:33:27.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:27 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/2489370755' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:27.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:27 vm04 ceph-mon[57581]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:33:27.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:27 vm04 ceph-mon[51427]: osdmap e238: 8 total, 8 up, 8 in 2026-03-09T18:33:27.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:27 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/2489370755' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:27.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:27 vm04 ceph-mon[51427]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:33:27.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:27 vm09 ceph-mon[54744]: osdmap e238: 8 total, 8 up, 8 in 2026-03-09T18:33:27.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:27 vm09 ceph-mon[54744]: from='client.? 
192.168.123.104:0/2489370755' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:27.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:27 vm09 ceph-mon[54744]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:33:28.212 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_xattrs_op PASSED [ 63%] 2026-03-09T18:33:28.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:28 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/2489370755' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:28.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:28 vm09 ceph-mon[54744]: osdmap e239: 8 total, 8 up, 8 in 2026-03-09T18:33:28.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:28 vm09 ceph-mon[54744]: pgmap v312: 196 pgs: 196 active+clean; 455 KiB data, 438 MiB used, 160 GiB / 160 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-09T18:33:28.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:28 vm09 ceph-mon[54744]: osdmap e240: 8 total, 8 up, 8 in 2026-03-09T18:33:28.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:28 vm04 ceph-mon[57581]: from='client.? 
192.168.123.104:0/2489370755' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:28.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:28 vm04 ceph-mon[57581]: osdmap e239: 8 total, 8 up, 8 in 2026-03-09T18:33:28.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:28 vm04 ceph-mon[57581]: pgmap v312: 196 pgs: 196 active+clean; 455 KiB data, 438 MiB used, 160 GiB / 160 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-09T18:33:28.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:28 vm04 ceph-mon[57581]: osdmap e240: 8 total, 8 up, 8 in 2026-03-09T18:33:28.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:28 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/2489370755' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:28.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:28 vm04 ceph-mon[51427]: osdmap e239: 8 total, 8 up, 8 in 2026-03-09T18:33:28.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:28 vm04 ceph-mon[51427]: pgmap v312: 196 pgs: 196 active+clean; 455 KiB data, 438 MiB used, 160 GiB / 160 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-09T18:33:28.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:28 vm04 ceph-mon[51427]: osdmap e240: 8 total, 8 up, 8 in 2026-03-09T18:33:29.217 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:33:28 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ::ffff:192.168.123.109 - - [09/Mar/2026:18:33:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:33:30.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:30 vm09 ceph-mon[54744]: osdmap e241: 8 total, 8 up, 8 in 2026-03-09T18:33:30.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:30 vm09 ceph-mon[54744]: pgmap v315: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 438 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:30.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 
18:33:30 vm04 ceph-mon[57581]: osdmap e241: 8 total, 8 up, 8 in 2026-03-09T18:33:30.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:30 vm04 ceph-mon[57581]: pgmap v315: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 438 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:30.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:30 vm04 ceph-mon[51427]: osdmap e241: 8 total, 8 up, 8 in 2026-03-09T18:33:30.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:30 vm04 ceph-mon[51427]: pgmap v315: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 438 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:31.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:31 vm09 ceph-mon[54744]: osdmap e242: 8 total, 8 up, 8 in 2026-03-09T18:33:31.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:31 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3319635152' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:31.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:31 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:31.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:31 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:33:31.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:31 vm04 ceph-mon[57581]: osdmap e242: 8 total, 8 up, 8 in 2026-03-09T18:33:31.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:31 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3319635152' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:31.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:31 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:31.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:31 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:33:31.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:31 vm04 ceph-mon[51427]: osdmap e242: 8 total, 8 up, 8 in 2026-03-09T18:33:31.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:31 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3319635152' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:31.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:31 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:31.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:31 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:33:32.256 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_locator PASSED [ 64%] 2026-03-09T18:33:32.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:32 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:32.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:32 vm09 ceph-mon[54744]: osdmap e243: 8 total, 8 up, 8 in 2026-03-09T18:33:32.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:32 vm09 ceph-mon[54744]: pgmap v318: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 438 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:33:32.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:32 vm09 ceph-mon[54744]: osdmap e244: 8 total, 8 up, 8 in 2026-03-09T18:33:32.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:32 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:32.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:32 vm04 ceph-mon[57581]: osdmap e243: 8 total, 8 up, 8 in 2026-03-09T18:33:32.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:32 vm04 ceph-mon[57581]: pgmap v318: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 438 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:33:32.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:32 vm04 ceph-mon[57581]: osdmap e244: 8 total, 8 up, 8 in 2026-03-09T18:33:32.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:32 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:32.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:32 vm04 ceph-mon[51427]: osdmap e243: 8 total, 8 up, 8 in 2026-03-09T18:33:32.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:32 vm04 ceph-mon[51427]: pgmap v318: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 438 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:33:32.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:32 vm04 ceph-mon[51427]: osdmap e244: 8 total, 8 up, 8 in 2026-03-09T18:33:34.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:34 vm09 ceph-mon[54744]: osdmap e245: 8 total, 8 up, 8 in 2026-03-09T18:33:34.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:34 vm09 ceph-mon[54744]: pgmap v321: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:34.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:34 vm04 ceph-mon[57581]: osdmap e245: 8 total, 8 up, 8 in 2026-03-09T18:33:34.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:34 vm04 ceph-mon[57581]: pgmap v321: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:34.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:34 vm04 ceph-mon[51427]: osdmap e245: 8 total, 8 up, 8 in 2026-03-09T18:33:34.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:34 vm04 ceph-mon[51427]: pgmap v321: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:35.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:35 vm09 ceph-mon[54744]: osdmap e246: 8 total, 8 up, 8 in 2026-03-09T18:33:35.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:35 vm09 ceph-mon[54744]: from='client.? 
192.168.123.104:0/3901002689' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:35.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:35 vm04 ceph-mon[57581]: osdmap e246: 8 total, 8 up, 8 in 2026-03-09T18:33:35.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:35 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3901002689' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:35.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:35 vm04 ceph-mon[51427]: osdmap e246: 8 total, 8 up, 8 in 2026-03-09T18:33:35.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:35 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3901002689' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:36.298 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_operate_aio_write_op PASSED [ 65%] 2026-03-09T18:33:36.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:36 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3901002689' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:36.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:36 vm09 ceph-mon[54744]: osdmap e247: 8 total, 8 up, 8 in 2026-03-09T18:33:36.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:36 vm09 ceph-mon[54744]: pgmap v324: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:36.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:36 vm04 ceph-mon[57581]: from='client.? 
192.168.123.104:0/3901002689' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:36.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:36 vm04 ceph-mon[57581]: osdmap e247: 8 total, 8 up, 8 in 2026-03-09T18:33:36.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:36 vm04 ceph-mon[57581]: pgmap v324: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:36.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:36 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3901002689' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:36.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:36 vm04 ceph-mon[51427]: osdmap e247: 8 total, 8 up, 8 in 2026-03-09T18:33:36.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:36 vm04 ceph-mon[51427]: pgmap v324: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:37.108 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:33:36 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug there is no tcmu-runner data available 2026-03-09T18:33:37.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:37 vm09 ceph-mon[54744]: osdmap e248: 8 total, 8 up, 8 in 2026-03-09T18:33:37.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:37 vm09 ceph-mon[54744]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:33:37.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:37 vm04 ceph-mon[57581]: osdmap e248: 8 total, 8 up, 8 in 2026-03-09T18:33:37.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:37 vm04 ceph-mon[57581]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:33:37.717 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:37 vm04 ceph-mon[51427]: osdmap e248: 8 total, 8 up, 8 in 2026-03-09T18:33:37.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:37 vm04 ceph-mon[51427]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:33:38.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:38 vm09 ceph-mon[54744]: osdmap e249: 8 total, 8 up, 8 in 2026-03-09T18:33:38.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:38 vm09 ceph-mon[54744]: pgmap v327: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:38.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:38 vm04 ceph-mon[57581]: osdmap e249: 8 total, 8 up, 8 in 2026-03-09T18:33:38.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:38 vm04 ceph-mon[57581]: pgmap v327: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:38.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:38 vm04 ceph-mon[51427]: osdmap e249: 8 total, 8 up, 8 in 2026-03-09T18:33:38.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:38 vm04 ceph-mon[51427]: pgmap v327: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:39.217 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:33:38 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ::ffff:192.168.123.109 - - [09/Mar/2026:18:33:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:33:39.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:39 vm09 ceph-mon[54744]: osdmap e250: 8 total, 8 up, 8 in 2026-03-09T18:33:39.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:39 vm09 ceph-mon[54744]: from='client.? 
192.168.123.104:0/308777020' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:39.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:39 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:39.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:39 vm04 ceph-mon[57581]: osdmap e250: 8 total, 8 up, 8 in 2026-03-09T18:33:39.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:39 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/308777020' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:39.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:39 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:39.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:39 vm04 ceph-mon[51427]: osdmap e250: 8 total, 8 up, 8 in 2026-03-09T18:33:39.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:39 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/308777020' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:39.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:39 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:40.530 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_write PASSED [ 67%] 2026-03-09T18:33:40.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:40 vm09 ceph-mon[54744]: pgmap v329: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:40.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:40 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:40.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:40 vm09 ceph-mon[54744]: osdmap e251: 8 total, 8 up, 8 in 2026-03-09T18:33:40.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:40 vm04 ceph-mon[57581]: pgmap v329: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:40.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:40 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:40.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:40 vm04 ceph-mon[57581]: osdmap e251: 8 total, 8 up, 8 in 2026-03-09T18:33:40.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:40 vm04 ceph-mon[51427]: pgmap v329: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:40.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:40 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:40.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:40 vm04 ceph-mon[51427]: osdmap e251: 8 total, 8 up, 8 in 2026-03-09T18:33:41.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:41 vm09 ceph-mon[54744]: osdmap e252: 8 total, 8 up, 8 in 2026-03-09T18:33:41.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:41 vm04 ceph-mon[57581]: osdmap e252: 8 total, 8 up, 8 in 2026-03-09T18:33:41.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:41 vm04 ceph-mon[51427]: osdmap e252: 8 total, 8 up, 8 in 2026-03-09T18:33:42.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:42 vm09 ceph-mon[54744]: pgmap v332: 164 pgs: 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:33:42.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:42 vm09 ceph-mon[54744]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:33:42.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:42 vm09 ceph-mon[54744]: osdmap e253: 8 total, 8 up, 8 in 2026-03-09T18:33:42.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:42 vm04 ceph-mon[57581]: pgmap v332: 164 pgs: 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:33:42.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:42 vm04 ceph-mon[57581]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:33:42.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:42 vm04 ceph-mon[57581]: osdmap e253: 8 total, 8 up, 8 in 2026-03-09T18:33:42.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:42 vm04 ceph-mon[51427]: pgmap v332: 164 pgs: 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:33:42.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:42 vm04 ceph-mon[51427]: Health check update: 1 pool(s) do not have an application 
enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:33:42.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:42 vm04 ceph-mon[51427]: osdmap e253: 8 total, 8 up, 8 in 2026-03-09T18:33:43.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:43 vm09 ceph-mon[54744]: osdmap e254: 8 total, 8 up, 8 in 2026-03-09T18:33:43.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:43 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/1797558263' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:43.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:43 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:43.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:43 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:33:43.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:43 vm04 ceph-mon[57581]: osdmap e254: 8 total, 8 up, 8 in 2026-03-09T18:33:43.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:43 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/1797558263' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:43.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:43 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:43.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:43 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:33:43.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:43 vm04 ceph-mon[51427]: osdmap e254: 8 total, 8 up, 8 in 2026-03-09T18:33:43.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:43 vm04 ceph-mon[51427]: from='client.? 
192.168.123.104:0/1797558263' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:43.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:43 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:43.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:43 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:33:44.593 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_cmpext PASSED [ 68%] 2026-03-09T18:33:44.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:44 vm09 ceph-mon[54744]: pgmap v335: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:44.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:44 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:44.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:44 vm09 ceph-mon[54744]: osdmap e255: 8 total, 8 up, 8 in 2026-03-09T18:33:44.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:44 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:33:44.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:44 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:33:44.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:44 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:33:44.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:44 vm04 ceph-mon[57581]: pgmap v335: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:44.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:44 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:44.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:44 vm04 ceph-mon[57581]: osdmap e255: 8 total, 8 up, 8 in 2026-03-09T18:33:44.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:44 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:33:44.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:44 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:33:44.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:44 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:33:44.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:44 vm04 ceph-mon[51427]: pgmap v335: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:44.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:44 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:44.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:44 vm04 ceph-mon[51427]: osdmap e255: 8 total, 8 up, 8 in 2026-03-09T18:33:44.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:44 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:33:44.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:44 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:33:44.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:44 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:33:45.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:45 vm04 ceph-mon[57581]: osdmap e256: 8 total, 8 up, 8 in 2026-03-09T18:33:45.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:45 vm04 ceph-mon[51427]: osdmap e256: 8 total, 8 up, 8 in 2026-03-09T18:33:46.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:45 vm09 ceph-mon[54744]: osdmap e256: 8 total, 8 up, 8 in 2026-03-09T18:33:46.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:46 vm04 ceph-mon[57581]: pgmap v338: 164 pgs: 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:46.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:46 vm04 ceph-mon[57581]: osdmap e257: 8 total, 8 up, 8 in 2026-03-09T18:33:46.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:46 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:33:46.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:46 vm04 ceph-mon[51427]: pgmap v338: 164 pgs: 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-09T18:33:46.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:46 vm04 ceph-mon[51427]: osdmap e257: 8 total, 8 up, 8 in 2026-03-09T18:33:46.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:46 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:33:47.108 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:33:46 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug there is no tcmu-runner data available 2026-03-09T18:33:47.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:46 vm09 ceph-mon[54744]: pgmap v338: 164 pgs: 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:47.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:46 vm09 ceph-mon[54744]: osdmap e257: 8 total, 8 up, 8 in 2026-03-09T18:33:47.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:46 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:33:47.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:47 vm04 ceph-mon[57581]: osdmap e258: 8 total, 8 up, 8 in 2026-03-09T18:33:47.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:47 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/2119432951' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:47.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:47 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:47.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:47 vm04 ceph-mon[57581]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:33:47.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:47 vm04 ceph-mon[51427]: osdmap e258: 8 total, 8 up, 8 in 2026-03-09T18:33:47.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:47 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/2119432951' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:47.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:47 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:47.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:47 vm04 ceph-mon[51427]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:33:48.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:47 vm09 ceph-mon[54744]: osdmap e258: 8 total, 8 up, 8 in 2026-03-09T18:33:48.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:47 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/2119432951' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:48.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:47 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:48.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:47 vm09 ceph-mon[54744]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:33:48.641 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_rmxattr PASSED [ 69%] 2026-03-09T18:33:48.903 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:48 vm04 ceph-mon[57581]: pgmap v341: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 445 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:48.903 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:48 vm04 ceph-mon[57581]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:33:48.903 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:48 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:48.903 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:48 vm04 ceph-mon[57581]: osdmap e259: 8 total, 8 up, 8 in 2026-03-09T18:33:48.903 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:48 vm04 ceph-mon[51427]: pgmap v341: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 445 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:48.903 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:48 vm04 ceph-mon[51427]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:33:48.904 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:48 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:48.904 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:48 vm04 ceph-mon[51427]: osdmap e259: 8 total, 8 up, 8 in 2026-03-09T18:33:49.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:48 vm09 ceph-mon[54744]: pgmap v341: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 445 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:49.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:48 vm09 ceph-mon[54744]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:33:49.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:48 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:49.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:48 vm09 ceph-mon[54744]: osdmap e259: 8 total, 8 up, 8 in 2026-03-09T18:33:49.217 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:33:48 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ::ffff:192.168.123.109 - - [09/Mar/2026:18:33:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:33:49.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:49 vm04 ceph-mon[57581]: osdmap e260: 8 total, 8 up, 8 in 2026-03-09T18:33:49.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:49 vm04 ceph-mon[51427]: osdmap e260: 8 total, 8 up, 8 in 2026-03-09T18:33:50.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:49 vm09 ceph-mon[54744]: osdmap e260: 8 total, 8 up, 8 in 2026-03-09T18:33:50.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:50 vm04 ceph-mon[57581]: pgmap v344: 164 pgs: 164 active+clean; 455 KiB data, 445 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:50.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:50 vm04 ceph-mon[57581]: osdmap e261: 8 total, 8 up, 8 in 2026-03-09T18:33:50.967 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:50 vm04 ceph-mon[51427]: pgmap v344: 164 pgs: 164 active+clean; 455 KiB data, 445 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:50.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:50 vm04 ceph-mon[51427]: osdmap e261: 8 total, 8 up, 8 in 2026-03-09T18:33:51.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:50 vm09 ceph-mon[54744]: pgmap v344: 164 pgs: 164 active+clean; 455 KiB data, 445 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:51.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:50 vm09 ceph-mon[54744]: osdmap e261: 8 total, 8 up, 8 in 2026-03-09T18:33:51.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:51 vm04 ceph-mon[57581]: osdmap e262: 8 total, 8 up, 8 in 2026-03-09T18:33:51.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:51 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/2125976631' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:51.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:51 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:51.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:51 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:51.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:51 vm04 ceph-mon[57581]: osdmap e263: 8 total, 8 up, 8 in 2026-03-09T18:33:51.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:51 vm04 ceph-mon[51427]: osdmap e262: 8 total, 8 up, 8 in 2026-03-09T18:33:51.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:51 vm04 ceph-mon[51427]: from='client.? 
192.168.123.104:0/2125976631' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:51.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:51 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:51.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:51 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:51.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:51 vm04 ceph-mon[51427]: osdmap e263: 8 total, 8 up, 8 in 2026-03-09T18:33:52.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:51 vm09 ceph-mon[54744]: osdmap e262: 8 total, 8 up, 8 in 2026-03-09T18:33:52.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:51 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/2125976631' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:52.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:51 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:52.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:51 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:52.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:51 vm09 ceph-mon[54744]: osdmap e263: 8 total, 8 up, 8 in 2026-03-09T18:33:52.673 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_write_no_comp_ref PASSED [ 70%] 2026-03-09T18:33:52.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:52 vm04 ceph-mon[57581]: pgmap v347: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 445 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:33:52.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:52 vm04 ceph-mon[57581]: osdmap e264: 8 total, 8 up, 8 in 2026-03-09T18:33:52.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:52 vm04 ceph-mon[51427]: pgmap v347: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 445 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:33:52.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:52 vm04 ceph-mon[51427]: osdmap e264: 8 total, 8 up, 8 in 2026-03-09T18:33:53.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:52 vm09 ceph-mon[54744]: pgmap v347: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 445 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:33:53.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:52 vm09 ceph-mon[54744]: osdmap e264: 8 total, 8 up, 8 in 2026-03-09T18:33:54.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:54 vm04 ceph-mon[51427]: pgmap v350: 164 pgs: 164 active+clean; 455 KiB data, 445 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:54.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:54 vm04 ceph-mon[51427]: osdmap e265: 8 total, 8 up, 8 in 2026-03-09T18:33:54.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:54 vm04 ceph-mon[51427]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:33:54.967 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:54 vm04 ceph-mon[57581]: pgmap v350: 164 pgs: 164 active+clean; 455 KiB data, 445 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:54.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:54 vm04 ceph-mon[57581]: osdmap e265: 8 total, 8 up, 8 in 2026-03-09T18:33:54.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:54 vm04 ceph-mon[57581]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:33:55.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:54 vm09 ceph-mon[54744]: pgmap v350: 164 pgs: 164 active+clean; 455 KiB data, 445 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:55.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:54 vm09 ceph-mon[54744]: osdmap e265: 8 total, 8 up, 8 in 2026-03-09T18:33:55.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:54 vm09 ceph-mon[54744]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:33:55.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:55 vm04 ceph-mon[51427]: osdmap e266: 8 total, 8 up, 8 in 2026-03-09T18:33:55.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:55 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/2600315304' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:55.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:55 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:55.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:55 vm04 ceph-mon[57581]: osdmap e266: 8 total, 8 up, 8 in 2026-03-09T18:33:55.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:55 vm04 ceph-mon[57581]: from='client.? 
192.168.123.104:0/2600315304' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:55.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:55 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:56.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:55 vm09 ceph-mon[54744]: osdmap e266: 8 total, 8 up, 8 in 2026-03-09T18:33:56.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:55 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/2600315304' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:56.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:55 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:33:56.706 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_append PASSED [ 71%] 2026-03-09T18:33:56.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:56 vm04 ceph-mon[51427]: pgmap v353: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 445 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:56.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:56 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:56.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:56 vm04 ceph-mon[51427]: osdmap e267: 8 total, 8 up, 8 in 2026-03-09T18:33:56.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:56 vm04 ceph-mon[57581]: pgmap v353: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 445 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:56.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:56 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:56.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:56 vm04 ceph-mon[57581]: osdmap e267: 8 total, 8 up, 8 in 2026-03-09T18:33:57.108 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:33:56 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug there is no tcmu-runner data available 2026-03-09T18:33:57.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:56 vm09 ceph-mon[54744]: pgmap v353: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 445 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:57.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:56 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:33:57.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:56 vm09 ceph-mon[54744]: osdmap e267: 8 total, 8 up, 8 in 2026-03-09T18:33:58.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:57 vm09 ceph-mon[54744]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:33:58.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:57 vm09 ceph-mon[54744]: osdmap e268: 8 total, 8 up, 8 in 2026-03-09T18:33:58.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:57 vm04 ceph-mon[57581]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:33:58.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:57 vm04 ceph-mon[57581]: osdmap e268: 8 total, 8 up, 8 in 2026-03-09T18:33:58.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:57 vm04 ceph-mon[51427]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:33:58.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:57 vm04 ceph-mon[51427]: osdmap 
e268: 8 total, 8 up, 8 in 2026-03-09T18:33:59.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:58 vm09 ceph-mon[54744]: pgmap v356: 164 pgs: 164 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:59.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:58 vm09 ceph-mon[54744]: osdmap e269: 8 total, 8 up, 8 in 2026-03-09T18:33:59.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:58 vm04 ceph-mon[57581]: pgmap v356: 164 pgs: 164 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:59.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:58 vm04 ceph-mon[57581]: osdmap e269: 8 total, 8 up, 8 in 2026-03-09T18:33:59.217 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:33:58 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ::ffff:192.168.123.109 - - [09/Mar/2026:18:33:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:33:59.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:58 vm04 ceph-mon[51427]: pgmap v356: 164 pgs: 164 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:59.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:58 vm04 ceph-mon[51427]: osdmap e269: 8 total, 8 up, 8 in 2026-03-09T18:34:00.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:59 vm09 ceph-mon[54744]: osdmap e270: 8 total, 8 up, 8 in 2026-03-09T18:34:00.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:59 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3228619988' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:00.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:59 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:00.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:59 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:34:00.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:33:59 vm09 ceph-mon[54744]: osdmap e271: 8 total, 8 up, 8 in 2026-03-09T18:34:00.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:59 vm04 ceph-mon[51427]: osdmap e270: 8 total, 8 up, 8 in 2026-03-09T18:34:00.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:59 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3228619988' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:00.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:59 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:00.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:59 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:34:00.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:33:59 vm04 ceph-mon[51427]: osdmap e271: 8 total, 8 up, 8 in 2026-03-09T18:34:00.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:59 vm04 ceph-mon[57581]: osdmap e270: 8 total, 8 up, 8 in 2026-03-09T18:34:00.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:59 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3228619988' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:00.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:59 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:00.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:59 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:34:00.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:33:59 vm04 ceph-mon[57581]: osdmap e271: 8 total, 8 up, 8 in 2026-03-09T18:34:00.736 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_write_full PASSED [ 72%] 2026-03-09T18:34:01.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:00 vm09 ceph-mon[54744]: pgmap v359: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:34:01.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:00 vm09 ceph-mon[54744]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:34:01.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:00 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:34:01.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:00 vm09 ceph-mon[54744]: osdmap e272: 8 total, 8 up, 8 in 2026-03-09T18:34:01.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:00 vm04 ceph-mon[57581]: pgmap v359: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:34:01.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:00 vm04 ceph-mon[57581]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:34:01.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:00 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:34:01.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:00 vm04 ceph-mon[57581]: osdmap e272: 8 total, 8 up, 8 in 2026-03-09T18:34:01.217 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:00 vm04 ceph-mon[51427]: pgmap v359: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:34:01.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:00 vm04 ceph-mon[51427]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:34:01.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:00 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:34:01.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:00 vm04 ceph-mon[51427]: osdmap e272: 8 total, 8 up, 8 in 2026-03-09T18:34:03.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:02 vm09 ceph-mon[54744]: pgmap v362: 164 pgs: 164 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:34:03.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:02 vm09 ceph-mon[54744]: osdmap e273: 8 total, 8 up, 8 in 2026-03-09T18:34:03.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:02 vm04 ceph-mon[57581]: pgmap v362: 164 pgs: 164 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:34:03.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:02 vm04 ceph-mon[57581]: osdmap e273: 8 total, 8 up, 8 in 2026-03-09T18:34:03.216 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:02 vm04 ceph-mon[51427]: pgmap v362: 164 pgs: 164 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:34:03.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:02 vm04 ceph-mon[51427]: osdmap e273: 8 total, 8 up, 8 in 2026-03-09T18:34:04.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:03 vm09 ceph-mon[54744]: osdmap e274: 8 total, 8 up, 8 in 2026-03-09T18:34:04.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:03 vm09 ceph-mon[54744]: from='client.? 
192.168.123.104:0/1443283918' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:04.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:03 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:04.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:03 vm04 ceph-mon[57581]: osdmap e274: 8 total, 8 up, 8 in 2026-03-09T18:34:04.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:03 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/1443283918' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:04.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:03 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:04.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:03 vm04 ceph-mon[51427]: osdmap e274: 8 total, 8 up, 8 in 2026-03-09T18:34:04.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:03 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/1443283918' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:04.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:03 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:04.809 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_writesame PASSED [ 73%] 2026-03-09T18:34:05.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:04 vm09 ceph-mon[54744]: pgmap v365: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 447 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:34:05.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:04 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:34:05.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:04 vm09 ceph-mon[54744]: osdmap e275: 8 total, 8 up, 8 in 2026-03-09T18:34:05.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:04 vm04 ceph-mon[57581]: pgmap v365: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 447 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:34:05.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:04 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:34:05.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:04 vm04 ceph-mon[57581]: osdmap e275: 8 total, 8 up, 8 in 2026-03-09T18:34:05.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:04 vm04 ceph-mon[51427]: pgmap v365: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 447 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:34:05.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:04 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:34:05.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:04 vm04 ceph-mon[51427]: osdmap e275: 8 total, 8 up, 8 in 2026-03-09T18:34:06.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:05 vm09 ceph-mon[54744]: osdmap e276: 8 total, 8 up, 8 in 2026-03-09T18:34:06.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:05 vm04 ceph-mon[57581]: osdmap e276: 8 total, 8 up, 8 in 2026-03-09T18:34:06.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:05 vm04 ceph-mon[51427]: osdmap e276: 8 total, 8 up, 8 in 2026-03-09T18:34:07.108 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:34:06 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug there is no tcmu-runner data available 2026-03-09T18:34:07.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:06 vm09 ceph-mon[54744]: pgmap v368: 164 pgs: 164 active+clean; 455 KiB data, 447 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:34:07.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:06 vm09 ceph-mon[54744]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:34:07.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:06 vm09 ceph-mon[54744]: osdmap e277: 8 total, 8 up, 8 in 2026-03-09T18:34:07.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:06 vm04 ceph-mon[57581]: pgmap v368: 164 pgs: 164 active+clean; 455 KiB data, 447 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:34:07.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:06 vm04 ceph-mon[57581]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:34:07.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:06 vm04 ceph-mon[57581]: osdmap e277: 8 total, 8 up, 8 in 2026-03-09T18:34:07.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:06 vm04 ceph-mon[51427]: 
pgmap v368: 164 pgs: 164 active+clean; 455 KiB data, 447 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:34:07.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:06 vm04 ceph-mon[51427]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:34:07.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:06 vm04 ceph-mon[51427]: osdmap e277: 8 total, 8 up, 8 in 2026-03-09T18:34:08.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:07 vm04 ceph-mon[57581]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:34:08.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:07 vm04 ceph-mon[57581]: osdmap e278: 8 total, 8 up, 8 in 2026-03-09T18:34:08.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:07 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/1824420576' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:08.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:07 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:08.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:07 vm04 ceph-mon[51427]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:34:08.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:07 vm04 ceph-mon[51427]: osdmap e278: 8 total, 8 up, 8 in 2026-03-09T18:34:08.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:07 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/1824420576' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:08.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:07 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:08.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:07 vm09 ceph-mon[54744]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:34:08.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:07 vm09 ceph-mon[54744]: osdmap e278: 8 total, 8 up, 8 in 2026-03-09T18:34:08.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:07 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/1824420576' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:08.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:07 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:08.921 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_stat PASSED [ 74%] 2026-03-09T18:34:09.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:08 vm04 ceph-mon[57581]: pgmap v371: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 447 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:34:09.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:08 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:34:09.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:08 vm04 ceph-mon[57581]: osdmap e279: 8 total, 8 up, 8 in 2026-03-09T18:34:09.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:08 vm04 ceph-mon[51427]: pgmap v371: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 447 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:34:09.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:08 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:34:09.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:08 vm04 ceph-mon[51427]: osdmap e279: 8 total, 8 up, 8 in 2026-03-09T18:34:09.217 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:34:08 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ::ffff:192.168.123.109 - - [09/Mar/2026:18:34:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:34:09.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:08 vm09 ceph-mon[54744]: pgmap v371: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 447 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:34:09.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:08 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:34:09.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:08 vm09 ceph-mon[54744]: osdmap e279: 8 total, 8 up, 8 in 2026-03-09T18:34:10.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:09 vm04 ceph-mon[57581]: osdmap e280: 8 total, 8 up, 8 in 2026-03-09T18:34:10.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:09 vm04 ceph-mon[51427]: osdmap e280: 8 total, 8 up, 8 in 2026-03-09T18:34:10.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:09 vm09 ceph-mon[54744]: osdmap e280: 8 total, 8 up, 8 in 2026-03-09T18:34:11.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:11 vm09 ceph-mon[54744]: pgmap v374: 164 pgs: 164 active+clean; 455 KiB data, 447 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:34:11.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:11 vm09 ceph-mon[54744]: osdmap e281: 8 total, 8 up, 8 in 2026-03-09T18:34:11.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:11 vm04 ceph-mon[57581]: pgmap v374: 164 pgs: 164 active+clean; 455 KiB data, 447 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-09T18:34:11.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:11 vm04 ceph-mon[57581]: osdmap e281: 8 total, 8 up, 8 in 2026-03-09T18:34:11.466 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:11 vm04 ceph-mon[51427]: pgmap v374: 164 pgs: 164 active+clean; 455 KiB data, 447 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:34:11.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:11 vm04 ceph-mon[51427]: osdmap e281: 8 total, 8 up, 8 in 2026-03-09T18:34:12.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:12 vm09 ceph-mon[54744]: osdmap e282: 8 total, 8 up, 8 in 2026-03-09T18:34:12.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:12 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3546771232' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:12.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:12 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:12.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:12 vm09 ceph-mon[54744]: pgmap v377: 196 pgs: 28 unknown, 168 active+clean; 455 KiB data, 447 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:34:12.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:12 vm04 ceph-mon[57581]: osdmap e282: 8 total, 8 up, 8 in 2026-03-09T18:34:12.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:12 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3546771232' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:12.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:12 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:12.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:12 vm04 ceph-mon[57581]: pgmap v377: 196 pgs: 28 unknown, 168 active+clean; 455 KiB data, 447 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:34:12.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:12 vm04 ceph-mon[51427]: osdmap e282: 8 total, 8 up, 8 in 2026-03-09T18:34:12.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:12 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3546771232' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:12.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:12 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:12.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:12 vm04 ceph-mon[51427]: pgmap v377: 196 pgs: 28 unknown, 168 active+clean; 455 KiB data, 447 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:34:13.242 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_remove PASSED [ 75%] 2026-03-09T18:34:13.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:13 vm09 ceph-mon[54744]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:34:13.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:13 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:34:13.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:13 vm09 ceph-mon[54744]: osdmap e283: 8 total, 8 up, 8 in 2026-03-09T18:34:13.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:13 vm04 ceph-mon[57581]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:34:13.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:13 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:34:13.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:13 vm04 ceph-mon[57581]: osdmap e283: 8 total, 8 up, 8 in 2026-03-09T18:34:13.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:13 vm04 ceph-mon[51427]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:34:13.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:13 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:34:13.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:13 vm04 ceph-mon[51427]: osdmap e283: 8 total, 8 up, 8 in 2026-03-09T18:34:14.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:14 vm09 ceph-mon[54744]: osdmap e284: 8 total, 8 up, 8 in 2026-03-09T18:34:14.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:14 vm09 ceph-mon[54744]: pgmap v380: 164 pgs: 164 active+clean; 455 KiB data, 448 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:34:14.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:14 vm04 ceph-mon[57581]: osdmap e284: 8 total, 8 up, 8 in 2026-03-09T18:34:14.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:14 vm04 ceph-mon[57581]: pgmap v380: 164 pgs: 164 active+clean; 455 KiB data, 448 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:34:14.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:14 vm04 ceph-mon[51427]: osdmap e284: 8 total, 8 up, 8 in 2026-03-09T18:34:14.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:14 vm04 ceph-mon[51427]: pgmap v380: 164 pgs: 164 active+clean; 455 KiB data, 448 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:34:15.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:15 vm09 ceph-mon[54744]: osdmap e285: 8 total, 8 up, 8 in 2026-03-09T18:34:15.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:15 vm04 ceph-mon[57581]: osdmap e285: 8 total, 8 up, 8 in 2026-03-09T18:34:15.716 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:15 vm04 ceph-mon[51427]: osdmap e285: 8 total, 8 up, 8 in 2026-03-09T18:34:16.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:16 vm09 ceph-mon[54744]: osdmap e286: 8 total, 8 up, 8 in 2026-03-09T18:34:16.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:16 vm09 ceph-mon[54744]: from='client.? 
192.168.123.104:0/3259631630' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "foo", "format": "json"}]: dispatch 2026-03-09T18:34:16.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:16 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3259631630' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T18:34:16.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:16 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T18:34:16.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:16 vm09 ceph-mon[54744]: pgmap v383: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 448 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:34:16.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:16 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:34:16.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:16 vm04 ceph-mon[57581]: osdmap e286: 8 total, 8 up, 8 in 2026-03-09T18:34:16.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:16 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3259631630' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "foo", "format": "json"}]: dispatch 2026-03-09T18:34:16.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:16 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3259631630' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T18:34:16.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:16 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T18:34:16.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:16 vm04 ceph-mon[57581]: pgmap v383: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 448 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:34:16.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:16 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:34:16.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:16 vm04 ceph-mon[51427]: osdmap e286: 8 total, 8 up, 8 in 2026-03-09T18:34:16.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:16 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3259631630' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "foo", "format": "json"}]: dispatch 2026-03-09T18:34:16.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:16 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3259631630' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T18:34:16.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:16 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T18:34:16.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:16 vm04 ceph-mon[51427]: pgmap v383: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 448 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:34:16.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:16 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:34:16.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:16 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mon-a[51423]: 2026-03-09T18:34:16.319+0000 7fb2a62da640 -1 mon.a@0(leader).osd e287 definitely_dead 0 2026-03-09T18:34:17.108 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:34:16 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug there is no tcmu-runner data available 2026-03-09T18:34:17.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:17 vm09 ceph-mon[54744]: Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-09T18:34:17.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:17 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-09T18:34:17.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:17 vm09 ceph-mon[54744]: osdmap e287: 8 total, 8 up, 8 in 2026-03-09T18:34:17.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:17 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3259631630' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["4", "0", "7"]}]: dispatch 2026-03-09T18:34:17.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:17 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["4", "0", "7"]}]: dispatch 2026-03-09T18:34:17.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:17 vm09 ceph-mon[54744]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:34:17.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:17 vm04 ceph-mon[57581]: Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-09T18:34:17.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:17 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-09T18:34:17.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:17 vm04 ceph-mon[57581]: osdmap e287: 8 total, 8 up, 8 in 2026-03-09T18:34:17.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:17 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3259631630' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["4", "0", "7"]}]: dispatch 2026-03-09T18:34:17.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:17 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["4", "0", "7"]}]: dispatch 2026-03-09T18:34:17.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:17 vm04 ceph-mon[57581]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:34:17.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:17 vm04 ceph-mon[51427]: Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-09T18:34:17.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:17 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-09T18:34:17.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:17 vm04 ceph-mon[51427]: osdmap e287: 8 total, 8 up, 8 in 2026-03-09T18:34:17.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:17 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3259631630' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["4", "0", "7"]}]: dispatch 2026-03-09T18:34:17.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:17 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["4", "0", "7"]}]: dispatch 2026-03-09T18:34:17.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:17 vm04 ceph-mon[51427]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:34:18.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:18 vm09 ceph-mon[54744]: Health check failed: 3 osds down (OSD_DOWN) 2026-03-09T18:34:18.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:18 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["4", "0", "7"]}]': finished 2026-03-09T18:34:18.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:18 vm09 ceph-mon[54744]: osdmap e288: 8 total, 5 up, 8 in 2026-03-09T18:34:18.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:18 vm09 ceph-mon[54744]: pgmap v386: 196 pgs: 73 stale+active+clean, 123 active+clean; 455 KiB data, 449 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T18:34:18.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:18 vm09 ceph-mon[54744]: osdmap e289: 8 total, 5 up, 8 in 2026-03-09T18:34:18.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:18 vm04 ceph-mon[57581]: Health check failed: 3 osds down (OSD_DOWN) 2026-03-09T18:34:18.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:18 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["4", "0", "7"]}]': finished 2026-03-09T18:34:18.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:18 vm04 ceph-mon[57581]: osdmap e288: 8 total, 5 up, 8 in 2026-03-09T18:34:18.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:18 vm04 ceph-mon[57581]: pgmap v386: 196 pgs: 73 stale+active+clean, 123 active+clean; 455 KiB data, 449 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T18:34:18.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:18 vm04 ceph-mon[57581]: osdmap e289: 8 total, 5 up, 8 in 2026-03-09T18:34:18.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:18 vm04 ceph-mon[51427]: Health check failed: 3 osds down (OSD_DOWN) 2026-03-09T18:34:18.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:18 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["4", "0", "7"]}]': finished 2026-03-09T18:34:18.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:18 vm04 ceph-mon[51427]: osdmap e288: 8 total, 5 up, 8 in 2026-03-09T18:34:18.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:18 vm04 ceph-mon[51427]: pgmap v386: 196 pgs: 73 stale+active+clean, 123 active+clean; 455 KiB data, 449 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T18:34:18.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:18 vm04 ceph-mon[51427]: osdmap e289: 8 total, 5 up, 8 in 2026-03-09T18:34:19.217 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:34:18 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ::ffff:192.168.123.109 - - [09/Mar/2026:18:34:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:34:19.341 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:34:19 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:34:19.054+0000 7f49b8206640 -1 osd.7 289 osdmap NOUP flag is set, waiting for it to clear 2026-03-09T18:34:19.608 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:19 vm09 ceph-mon[54744]: osd.7 marked itself dead as of e289 2026-03-09T18:34:19.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:19 vm09 ceph-mon[54744]: osd.0 marked itself dead as of e289 2026-03-09T18:34:19.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:19 vm09 ceph-mon[54744]: osd.4 marked itself dead as of e289 2026-03-09T18:34:19.608 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:34:19 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:34:19.351+0000 7f49ab5e3640 -1 osd.7 290 osdmap NOUP flag is set, waiting for it to clear 2026-03-09T18:34:19.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:19 vm04 ceph-mon[57581]: osd.7 marked itself dead as of e289 2026-03-09T18:34:19.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:19 vm04 ceph-mon[57581]: osd.0 marked itself dead as of e289 2026-03-09T18:34:19.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:19 vm04 ceph-mon[57581]: osd.4 marked itself dead as of e289 2026-03-09T18:34:19.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:19 vm04 ceph-mon[51427]: osd.7 marked itself dead as of e289 2026-03-09T18:34:19.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:19 vm04 ceph-mon[51427]: osd.0 marked itself dead as of e289 2026-03-09T18:34:19.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:19 vm04 ceph-mon[51427]: osd.4 marked itself dead as of e289 2026-03-09T18:34:20.217 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 18:34:19 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-0[60983]: 2026-03-09T18:34:19.937+0000 7f61b5e82640 -1 osd.0 290 osdmap NOUP flag is set, waiting for it to clear 2026-03-09T18:34:20.356 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 18:34:20 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-4[58851]: 2026-03-09T18:34:20.081+0000 7fe10bd25640 -1 osd.4 290 osdmap NOUP flag is set, waiting for it to clear 2026-03-09T18:34:20.608 
INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 18:34:20 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-4[58851]: 2026-03-09T18:34:20.360+0000 7fe10733c640 -1 osd.4 291 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T18:34:20.608 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:34:20 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:34:20.364+0000 7f49b381d640 -1 osd.7 291 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T18:34:20.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:20 vm09 ceph-mon[54744]: Monitor daemon marked osd.7 down, but it is still running 2026-03-09T18:34:20.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:20 vm09 ceph-mon[54744]: map e289 wrongly marked me down at e288 2026-03-09T18:34:20.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:20 vm09 ceph-mon[54744]: Monitor daemon marked osd.0 down, but it is still running 2026-03-09T18:34:20.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:20 vm09 ceph-mon[54744]: map e289 wrongly marked me down at e288 2026-03-09T18:34:20.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:20 vm09 ceph-mon[54744]: Monitor daemon marked osd.4 down, but it is still running 2026-03-09T18:34:20.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:20 vm09 ceph-mon[54744]: map e289 wrongly marked me down at e288 2026-03-09T18:34:20.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:20 vm09 ceph-mon[54744]: osdmap e290: 8 total, 5 up, 8 in 2026-03-09T18:34:20.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:20 vm09 ceph-mon[54744]: pgmap v389: 196 pgs: 84 stale+active+clean, 112 active+clean; 455 KiB data, 449 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T18:34:20.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:20 vm09 ceph-mon[54744]: from='client.? 
192.168.123.104:0/3259631630' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:20.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:20 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:20.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:20 vm04 ceph-mon[57581]: Monitor daemon marked osd.7 down, but it is still running 2026-03-09T18:34:20.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:20 vm04 ceph-mon[57581]: map e289 wrongly marked me down at e288 2026-03-09T18:34:20.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:20 vm04 ceph-mon[57581]: Monitor daemon marked osd.0 down, but it is still running 2026-03-09T18:34:20.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:20 vm04 ceph-mon[57581]: map e289 wrongly marked me down at e288 2026-03-09T18:34:20.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:20 vm04 ceph-mon[57581]: Monitor daemon marked osd.4 down, but it is still running 2026-03-09T18:34:20.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:20 vm04 ceph-mon[57581]: map e289 wrongly marked me down at e288 2026-03-09T18:34:20.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:20 vm04 ceph-mon[57581]: osdmap e290: 8 total, 5 up, 8 in 2026-03-09T18:34:20.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:20 vm04 ceph-mon[57581]: pgmap v389: 196 pgs: 84 stale+active+clean, 112 active+clean; 455 KiB data, 449 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T18:34:20.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:20 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3259631630' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:20.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:20 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:20.717 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 18:34:20 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-0[60983]: 2026-03-09T18:34:20.369+0000 7f61b1499640 -1 osd.0 291 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T18:34:20.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:20 vm04 ceph-mon[51427]: Monitor daemon marked osd.7 down, but it is still running 2026-03-09T18:34:20.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:20 vm04 ceph-mon[51427]: map e289 wrongly marked me down at e288 2026-03-09T18:34:20.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:20 vm04 ceph-mon[51427]: Monitor daemon marked osd.0 down, but it is still running 2026-03-09T18:34:20.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:20 vm04 ceph-mon[51427]: map e289 wrongly marked me down at e288 2026-03-09T18:34:20.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:20 vm04 ceph-mon[51427]: Monitor daemon marked osd.4 down, but it is still running 2026-03-09T18:34:20.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:20 vm04 ceph-mon[51427]: map e289 wrongly marked me down at e288 2026-03-09T18:34:20.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:20 vm04 ceph-mon[51427]: osdmap e290: 8 total, 5 up, 8 in 2026-03-09T18:34:20.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:20 vm04 ceph-mon[51427]: pgmap v389: 196 pgs: 84 stale+active+clean, 112 active+clean; 455 KiB data, 449 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T18:34:20.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:20 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3259631630' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:20.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:20 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:21.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:21 vm04 ceph-mon[57581]: Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-09T18:34:21.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:21 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:34:21.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:21 vm04 ceph-mon[57581]: osdmap e291: 8 total, 5 up, 8 in 2026-03-09T18:34:21.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:21 vm04 ceph-mon[51427]: Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-09T18:34:21.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:21 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:34:21.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:21 vm04 ceph-mon[51427]: osdmap e291: 8 total, 5 up, 8 in 2026-03-09T18:34:21.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:21 vm09 ceph-mon[54744]: Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-09T18:34:21.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:21 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:34:21.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:21 vm09 ceph-mon[54744]: osdmap e291: 8 total, 5 up, 8 in 2026-03-09T18:34:22.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:22 vm04 ceph-mon[57581]: Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-09T18:34:22.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:22 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:34:22.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:22 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:34:22.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:22 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:34:22.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:22 vm04 ceph-mon[57581]: osd.4 [v2:192.168.123.109:6800/2821151016,v1:192.168.123.109:6801/2821151016] boot 2026-03-09T18:34:22.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:22 vm04 ceph-mon[57581]: osd.7 [v2:192.168.123.109:6824/3755915520,v1:192.168.123.109:6825/3755915520] boot 2026-03-09T18:34:22.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:22 vm04 ceph-mon[57581]: osd.0 [v2:192.168.123.104:6802/1654539160,v1:192.168.123.104:6803/1654539160] boot 2026-03-09T18:34:22.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:22 vm04 ceph-mon[57581]: osdmap e292: 8 total, 8 up, 8 in 2026-03-09T18:34:22.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:22 vm04 ceph-mon[57581]: pgmap v392: 196 pgs: 10 peering, 84 stale+active+clean, 102 active+clean; 455 KiB data, 449 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:34:22.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:22 
vm04 ceph-mon[57581]: osdmap e293: 8 total, 8 up, 8 in 2026-03-09T18:34:22.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:22 vm04 ceph-mon[51427]: Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-09T18:34:22.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:22 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:34:22.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:22 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:34:22.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:22 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:34:22.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:22 vm04 ceph-mon[51427]: osd.4 [v2:192.168.123.109:6800/2821151016,v1:192.168.123.109:6801/2821151016] boot 2026-03-09T18:34:22.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:22 vm04 ceph-mon[51427]: osd.7 [v2:192.168.123.109:6824/3755915520,v1:192.168.123.109:6825/3755915520] boot 2026-03-09T18:34:22.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:22 vm04 ceph-mon[51427]: osd.0 [v2:192.168.123.104:6802/1654539160,v1:192.168.123.104:6803/1654539160] boot 2026-03-09T18:34:22.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:22 vm04 ceph-mon[51427]: osdmap e292: 8 total, 8 up, 8 in 2026-03-09T18:34:22.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:22 vm04 ceph-mon[51427]: pgmap v392: 196 pgs: 10 peering, 84 stale+active+clean, 102 active+clean; 455 KiB data, 449 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:34:22.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:22 vm04 ceph-mon[51427]: osdmap e293: 8 total, 8 up, 8 in 2026-03-09T18:34:22.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:22 vm09 ceph-mon[54744]: 
Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-09T18:34:22.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:22 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:34:22.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:22 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:34:22.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:22 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:34:22.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:22 vm09 ceph-mon[54744]: osd.4 [v2:192.168.123.109:6800/2821151016,v1:192.168.123.109:6801/2821151016] boot 2026-03-09T18:34:22.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:22 vm09 ceph-mon[54744]: osd.7 [v2:192.168.123.109:6824/3755915520,v1:192.168.123.109:6825/3755915520] boot 2026-03-09T18:34:22.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:22 vm09 ceph-mon[54744]: osd.0 [v2:192.168.123.104:6802/1654539160,v1:192.168.123.104:6803/1654539160] boot 2026-03-09T18:34:22.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:22 vm09 ceph-mon[54744]: osdmap e292: 8 total, 8 up, 8 in 2026-03-09T18:34:22.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:22 vm09 ceph-mon[54744]: pgmap v392: 196 pgs: 10 peering, 84 stale+active+clean, 102 active+clean; 455 KiB data, 449 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:34:22.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:22 vm09 ceph-mon[54744]: osdmap e293: 8 total, 8 up, 8 in 2026-03-09T18:34:23.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:23 vm04 ceph-mon[57581]: from='client.? 
192.168.123.104:0/3259631630' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:23.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:23 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:23.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:23 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3259631630' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:23.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:23 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:23.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:23 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3259631630' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:23.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:23 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:24.435 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_read_wait_for_complete PASSED [ 76%] 2026-03-09T18:34:24.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:24 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:34:24.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:24 vm04 ceph-mon[57581]: osdmap e294: 8 total, 8 up, 8 in 2026-03-09T18:34:24.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:24 vm04 ceph-mon[57581]: pgmap v395: 196 pgs: 163 peering, 33 active+clean; 455 KiB data, 449 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:34:24.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:24 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:34:24.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:24 vm04 ceph-mon[51427]: osdmap e294: 8 total, 8 up, 8 in 2026-03-09T18:34:24.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:24 vm04 ceph-mon[51427]: pgmap v395: 196 pgs: 163 peering, 33 active+clean; 455 KiB data, 449 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:34:24.759 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:24 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:34:24.759 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:24 vm09 ceph-mon[54744]: osdmap e294: 8 total, 8 up, 8 in 2026-03-09T18:34:24.759 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:24 vm09 ceph-mon[54744]: pgmap v395: 196 pgs: 163 peering, 33 active+clean; 455 KiB data, 449 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:34:25.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:25 vm09 ceph-mon[54744]: osdmap e295: 8 total, 8 up, 8 in 2026-03-09T18:34:25.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:25 vm04 ceph-mon[57581]: osdmap e295: 8 total, 8 up, 8 in 2026-03-09T18:34:25.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:25 vm04 ceph-mon[51427]: osdmap e295: 8 total, 8 up, 8 in 2026-03-09T18:34:26.858 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:34:26 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug there is no tcmu-runner data available 2026-03-09T18:34:26.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:26 vm09 ceph-mon[54744]: pgmap v398: 196 pgs: 32 unknown, 137 peering, 27 active+clean; 455 KiB data, 449 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:34:26.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:26 vm09 ceph-mon[54744]: osdmap e296: 8 total, 8 up, 8 in 2026-03-09T18:34:26.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:26 vm09 
ceph-mon[54744]: osdmap e297: 8 total, 8 up, 8 in 2026-03-09T18:34:26.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:26 vm04 ceph-mon[57581]: pgmap v398: 196 pgs: 32 unknown, 137 peering, 27 active+clean; 455 KiB data, 449 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:34:26.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:26 vm04 ceph-mon[57581]: osdmap e296: 8 total, 8 up, 8 in 2026-03-09T18:34:26.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:26 vm04 ceph-mon[57581]: osdmap e297: 8 total, 8 up, 8 in 2026-03-09T18:34:26.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:26 vm04 ceph-mon[51427]: pgmap v398: 196 pgs: 32 unknown, 137 peering, 27 active+clean; 455 KiB data, 449 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:34:26.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:26 vm04 ceph-mon[51427]: osdmap e296: 8 total, 8 up, 8 in 2026-03-09T18:34:26.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:26 vm04 ceph-mon[51427]: osdmap e297: 8 total, 8 up, 8 in 2026-03-09T18:34:27.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:27 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3879042302' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "foo", "format": "json"}]: dispatch 2026-03-09T18:34:27.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:27 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3879042302' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T18:34:27.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:27 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T18:34:27.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:27 vm09 ceph-mon[54744]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:34:27.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:27 vm09 ceph-mon[54744]: Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-09T18:34:27.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:27 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-09T18:34:27.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:27 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3879042302' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["2", "5", "7"]}]: dispatch 2026-03-09T18:34:27.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:27 vm09 ceph-mon[54744]: osdmap e298: 8 total, 8 up, 8 in 2026-03-09T18:34:27.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:27 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["2", "5", "7"]}]: dispatch 2026-03-09T18:34:27.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:27 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3879042302' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "foo", "format": "json"}]: dispatch 2026-03-09T18:34:27.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:27 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3879042302' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T18:34:27.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:27 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T18:34:27.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:27 vm04 ceph-mon[57581]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:34:27.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:27 vm04 ceph-mon[57581]: Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-09T18:34:27.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:27 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-09T18:34:27.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:27 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3879042302' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["2", "5", "7"]}]: dispatch 2026-03-09T18:34:27.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:27 vm04 ceph-mon[57581]: osdmap e298: 8 total, 8 up, 8 in 2026-03-09T18:34:27.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:27 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["2", "5", "7"]}]: dispatch 2026-03-09T18:34:27.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:27 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mon-a[51423]: 2026-03-09T18:34:27.478+0000 7fb2a62da640 -1 mon.a@0(leader).osd e298 definitely_dead 0 2026-03-09T18:34:27.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:27 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3879042302' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "foo", "format": "json"}]: dispatch 2026-03-09T18:34:27.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:27 vm04 ceph-mon[51427]: from='client.? 
192.168.123.104:0/3879042302' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T18:34:27.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:27 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T18:34:27.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:27 vm04 ceph-mon[51427]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:34:27.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:27 vm04 ceph-mon[51427]: Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-09T18:34:27.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:27 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-09T18:34:27.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:27 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3879042302' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["2", "5", "7"]}]: dispatch 2026-03-09T18:34:27.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:27 vm04 ceph-mon[51427]: osdmap e298: 8 total, 8 up, 8 in 2026-03-09T18:34:27.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:27 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["2", "5", "7"]}]: dispatch 2026-03-09T18:34:28.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:28 vm09 ceph-mon[54744]: pgmap v400: 196 pgs: 2 creating+activating, 30 creating+peering, 164 active+clean; 455 KiB data, 450 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:34:28.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:28 vm09 ceph-mon[54744]: Health check failed: 3 osds down (OSD_DOWN) 2026-03-09T18:34:28.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:28 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["2", "5", "7"]}]': finished 2026-03-09T18:34:28.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:28 vm09 ceph-mon[54744]: osdmap e299: 8 total, 5 up, 8 in 2026-03-09T18:34:28.904 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:28 vm04 ceph-mon[57581]: pgmap v400: 196 pgs: 2 creating+activating, 30 creating+peering, 164 active+clean; 455 KiB data, 450 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:34:28.904 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:28 vm04 ceph-mon[57581]: Health check failed: 3 osds down (OSD_DOWN) 2026-03-09T18:34:28.904 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:28 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["2", "5", "7"]}]': finished 2026-03-09T18:34:28.904 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:28 vm04 ceph-mon[57581]: osdmap e299: 8 total, 5 up, 8 in 2026-03-09T18:34:28.904 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:28 vm04 ceph-mon[51427]: pgmap v400: 196 pgs: 2 creating+activating, 30 creating+peering, 164 active+clean; 455 KiB data, 450 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:34:28.904 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:28 vm04 ceph-mon[51427]: Health check failed: 3 osds down (OSD_DOWN) 2026-03-09T18:34:28.904 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:28 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["2", "5", "7"]}]': finished 2026-03-09T18:34:28.904 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:28 vm04 ceph-mon[51427]: osdmap e299: 8 total, 5 up, 8 in 2026-03-09T18:34:29.217 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:34:28 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ::ffff:192.168.123.109 - - [09/Mar/2026:18:34:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:34:30.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:30 vm09 ceph-mon[54744]: pgmap v403: 196 pgs: 16 stale+creating+peering, 52 stale+active+clean, 2 creating+activating, 14 creating+peering, 112 active+clean; 455 KiB data, 450 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:34:30.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:30 vm09 ceph-mon[54744]: osdmap e300: 8 total, 5 up, 8 in 2026-03-09T18:34:30.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:30 vm04 ceph-mon[57581]: pgmap v403: 196 pgs: 16 stale+creating+peering, 52 stale+active+clean, 2 creating+activating, 14 creating+peering, 112 active+clean; 455 KiB data, 450 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:34:30.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:30 vm04 ceph-mon[57581]: osdmap e300: 8 total, 5 up, 8 in 2026-03-09T18:34:30.967 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 18:34:30 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-2[71119]: 2026-03-09T18:34:30.748+0000 7f12ba62e640 -1 osd.2 300 osdmap NOUP flag is set, waiting for it to clear 2026-03-09T18:34:30.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:30 vm04 ceph-mon[51427]: pgmap v403: 196 pgs: 16 stale+creating+peering, 52 stale+active+clean, 2 creating+activating, 14 creating+peering, 112 active+clean; 455 KiB data, 450 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:34:30.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:30 vm04 ceph-mon[51427]: osdmap e300: 8 total, 5 up, 8 in 2026-03-09T18:34:31.358 
INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:34:31 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:34:31.046+0000 7f49b79f3640 -1 osd.7 300 osdmap NOUP flag is set, waiting for it to clear 2026-03-09T18:34:31.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:31 vm09 ceph-mon[54744]: Monitor daemon marked osd.7 down, but it is still running 2026-03-09T18:34:31.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:31 vm09 ceph-mon[54744]: map e300 wrongly marked me down at e299 2026-03-09T18:34:31.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:31 vm09 ceph-mon[54744]: osd.7 marked itself dead as of e300 2026-03-09T18:34:31.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:31 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:34:31.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:31 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:34:31.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:31 vm09 ceph-mon[54744]: Monitor daemon marked osd.2 down, but it is still running 2026-03-09T18:34:31.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:31 vm09 ceph-mon[54744]: map e300 wrongly marked me down at e299 2026-03-09T18:34:31.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:31 vm09 ceph-mon[54744]: osd.2 marked itself dead as of e300 2026-03-09T18:34:31.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:31 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3879042302' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:31.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:31 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:31.858 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:34:31 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:34:31.546+0000 7f49b381d640 -1 osd.7 301 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T18:34:31.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:31 vm04 ceph-mon[57581]: Monitor daemon marked osd.7 down, but it is still running 2026-03-09T18:34:31.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:31 vm04 ceph-mon[57581]: map e300 wrongly marked me down at e299 2026-03-09T18:34:31.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:31 vm04 ceph-mon[57581]: osd.7 marked itself dead as of e300 2026-03-09T18:34:31.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:31 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:34:31.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:31 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:34:31.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:31 vm04 ceph-mon[57581]: Monitor daemon marked osd.2 down, but it is still running 2026-03-09T18:34:31.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:31 vm04 ceph-mon[57581]: map e300 wrongly marked me down at e299 2026-03-09T18:34:31.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:31 vm04 ceph-mon[57581]: osd.2 marked itself dead as of e300 2026-03-09T18:34:31.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:31 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3879042302' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:31.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:31 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:31.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:31 vm04 ceph-mon[51427]: Monitor daemon marked osd.7 down, but it is still running 2026-03-09T18:34:31.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:31 vm04 ceph-mon[51427]: map e300 wrongly marked me down at e299 2026-03-09T18:34:31.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:31 vm04 ceph-mon[51427]: osd.7 marked itself dead as of e300 2026-03-09T18:34:31.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:31 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:34:31.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:31 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:34:31.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:31 vm04 ceph-mon[51427]: Monitor daemon marked osd.2 down, but it is still running 2026-03-09T18:34:31.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:31 vm04 ceph-mon[51427]: map e300 wrongly marked me down at e299 2026-03-09T18:34:31.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:31 vm04 ceph-mon[51427]: osd.2 marked itself dead as of e300 2026-03-09T18:34:31.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:31 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3879042302' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:31.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:31 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:31.967 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 18:34:31 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-2[71119]: 2026-03-09T18:34:31.543+0000 7f12b5c57640 -1 osd.2 301 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T18:34:32.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:32 vm09 ceph-mon[54744]: pgmap v405: 196 pgs: 4 undersized+degraded+peered+wait, 5 active+undersized+degraded+wait, 14 stale+creating+peering, 42 stale+active+clean, 2 creating+activating, 9 creating+peering, 4 undersized+peered+wait, 23 active+undersized+wait, 93 active+clean; 455 KiB data, 450 MiB used, 160 GiB / 160 GiB avail; 48/597 objects degraded (8.040%) 2026-03-09T18:34:32.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:32 vm09 ceph-mon[54744]: Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-09T18:34:32.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:32 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:34:32.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:32 vm09 ceph-mon[54744]: osdmap e301: 8 total, 5 up, 8 in 2026-03-09T18:34:32.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:32 vm09 ceph-mon[54744]: Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY) 2026-03-09T18:34:32.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:32 vm09 ceph-mon[54744]: Health check failed: Degraded data redundancy: 48/597 objects degraded (8.040%), 9 pgs degraded (PG_DEGRADED) 2026-03-09T18:34:32.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:32 vm04 ceph-mon[57581]: pgmap v405: 196 pgs: 4 undersized+degraded+peered+wait, 5 active+undersized+degraded+wait, 14 stale+creating+peering, 42 stale+active+clean, 2 creating+activating, 9 creating+peering, 4 undersized+peered+wait, 23 active+undersized+wait, 93 active+clean; 455 KiB data, 450 MiB used, 160 GiB / 160 GiB avail; 48/597 objects degraded (8.040%) 2026-03-09T18:34:32.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:32 vm04 ceph-mon[57581]: Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-09T18:34:32.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:32 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:34:32.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:32 vm04 ceph-mon[57581]: osdmap e301: 8 total, 5 up, 8 in 2026-03-09T18:34:32.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:32 vm04 ceph-mon[57581]: Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY) 2026-03-09T18:34:32.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:32 vm04 ceph-mon[57581]: Health check failed: Degraded data redundancy: 48/597 objects degraded (8.040%), 9 pgs degraded (PG_DEGRADED) 2026-03-09T18:34:32.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:32 vm04 ceph-mon[51427]: pgmap v405: 196 pgs: 4 undersized+degraded+peered+wait, 5 active+undersized+degraded+wait, 14 stale+creating+peering, 42 stale+active+clean, 2 creating+activating, 9 creating+peering, 4 undersized+peered+wait, 23 active+undersized+wait, 93 active+clean; 455 KiB data, 450 MiB used, 160 GiB / 160 GiB avail; 48/597 objects degraded (8.040%) 2026-03-09T18:34:32.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:32 vm04 ceph-mon[51427]: Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-09T18:34:32.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:32 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:34:32.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:32 vm04 ceph-mon[51427]: osdmap e301: 8 total, 5 up, 8 in 2026-03-09T18:34:32.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:32 vm04 ceph-mon[51427]: Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY) 2026-03-09T18:34:32.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:32 vm04 ceph-mon[51427]: Health check failed: Degraded data redundancy: 48/597 objects degraded (8.040%), 9 pgs degraded (PG_DEGRADED) 2026-03-09T18:34:33.358 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 18:34:32 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-5[63689]: 2026-03-09T18:34:32.891+0000 7fdd9b169640 -1 osd.5 302 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T18:34:33.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:33 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:34:33.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:33 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:34:33.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:33 vm09 ceph-mon[54744]: osd.2 [v2:192.168.123.104:6818/1080091581,v1:192.168.123.104:6819/1080091581] boot 2026-03-09T18:34:33.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:33 vm09 ceph-mon[54744]: osd.7 [v2:192.168.123.109:6824/3755915520,v1:192.168.123.109:6825/3755915520] boot 2026-03-09T18:34:33.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:33 vm09 ceph-mon[54744]: osdmap e302: 8 total, 7 up, 8 in 2026-03-09T18:34:33.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:33 vm09 ceph-mon[54744]: osd.5 marked itself dead as of e302 2026-03-09T18:34:33.966 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:33 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:34:33.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:33 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:34:33.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:33 vm04 ceph-mon[57581]: osd.2 [v2:192.168.123.104:6818/1080091581,v1:192.168.123.104:6819/1080091581] boot 2026-03-09T18:34:33.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:33 vm04 ceph-mon[57581]: osd.7 [v2:192.168.123.109:6824/3755915520,v1:192.168.123.109:6825/3755915520] boot 2026-03-09T18:34:33.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:33 vm04 ceph-mon[57581]: osdmap e302: 8 total, 7 up, 8 in 2026-03-09T18:34:33.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:33 vm04 ceph-mon[57581]: osd.5 marked itself dead as of e302 2026-03-09T18:34:33.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:33 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:34:33.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:33 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:34:33.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:33 vm04 ceph-mon[51427]: osd.2 [v2:192.168.123.104:6818/1080091581,v1:192.168.123.104:6819/1080091581] boot 2026-03-09T18:34:33.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:33 vm04 ceph-mon[51427]: osd.7 [v2:192.168.123.109:6824/3755915520,v1:192.168.123.109:6825/3755915520] boot 2026-03-09T18:34:33.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:33 vm04 ceph-mon[51427]: osdmap e302: 8 total, 7 up, 8 in 2026-03-09T18:34:33.967 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:33 vm04 ceph-mon[51427]: osd.5 marked itself dead as of e302 2026-03-09T18:34:34.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:34 vm09 ceph-mon[54744]: Monitor daemon marked osd.5 down, but it is still running 2026-03-09T18:34:34.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:34 vm09 ceph-mon[54744]: map e302 wrongly marked me down at e299 2026-03-09T18:34:34.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:34 vm09 ceph-mon[54744]: pgmap v408: 196 pgs: 6 undersized+degraded+peered+wait, 24 active+undersized+degraded+wait, 2 stale+creating+peering, 21 stale+active+clean, 29 undersized+peered+wait, 76 active+undersized+wait, 38 active+clean; 455 KiB data, 450 MiB used, 160 GiB / 160 GiB avail; 161/597 objects degraded (26.968%) 2026-03-09T18:34:34.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:34 vm09 ceph-mon[54744]: Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T18:34:34.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:34 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:34:34.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:34 vm09 ceph-mon[54744]: osd.5 [v2:192.168.123.109:6808/3792197053,v1:192.168.123.109:6809/3792197053] boot 2026-03-09T18:34:34.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:34 vm09 ceph-mon[54744]: osdmap e303: 8 total, 8 up, 8 in 2026-03-09T18:34:34.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:34 vm04 ceph-mon[57581]: Monitor daemon marked osd.5 down, but it is still running 2026-03-09T18:34:34.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:34 vm04 ceph-mon[57581]: map e302 wrongly marked me down at e299 2026-03-09T18:34:34.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:34 vm04 ceph-mon[57581]: pgmap v408: 196 pgs: 6 undersized+degraded+peered+wait, 24 active+undersized+degraded+wait, 2 
stale+creating+peering, 21 stale+active+clean, 29 undersized+peered+wait, 76 active+undersized+wait, 38 active+clean; 455 KiB data, 450 MiB used, 160 GiB / 160 GiB avail; 161/597 objects degraded (26.968%) 2026-03-09T18:34:34.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:34 vm04 ceph-mon[57581]: Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T18:34:34.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:34 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:34:34.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:34 vm04 ceph-mon[57581]: osd.5 [v2:192.168.123.109:6808/3792197053,v1:192.168.123.109:6809/3792197053] boot 2026-03-09T18:34:34.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:34 vm04 ceph-mon[57581]: osdmap e303: 8 total, 8 up, 8 in 2026-03-09T18:34:34.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:34 vm04 ceph-mon[51427]: Monitor daemon marked osd.5 down, but it is still running 2026-03-09T18:34:34.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:34 vm04 ceph-mon[51427]: map e302 wrongly marked me down at e299 2026-03-09T18:34:34.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:34 vm04 ceph-mon[51427]: pgmap v408: 196 pgs: 6 undersized+degraded+peered+wait, 24 active+undersized+degraded+wait, 2 stale+creating+peering, 21 stale+active+clean, 29 undersized+peered+wait, 76 active+undersized+wait, 38 active+clean; 455 KiB data, 450 MiB used, 160 GiB / 160 GiB avail; 161/597 objects degraded (26.968%) 2026-03-09T18:34:34.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:34 vm04 ceph-mon[51427]: Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T18:34:34.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:34 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:34:34.967 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:34 vm04 ceph-mon[51427]: osd.5 [v2:192.168.123.109:6808/3792197053,v1:192.168.123.109:6809/3792197053] boot 2026-03-09T18:34:34.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:34 vm04 ceph-mon[51427]: osdmap e303: 8 total, 8 up, 8 in 2026-03-09T18:34:35.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:35 vm04 ceph-mon[57581]: osdmap e304: 8 total, 8 up, 8 in 2026-03-09T18:34:35.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:35 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3879042302' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:35.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:35 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:35.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:35 vm04 ceph-mon[51427]: osdmap e304: 8 total, 8 up, 8 in 2026-03-09T18:34:35.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:35 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3879042302' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:35.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:35 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:36.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:35 vm09 ceph-mon[54744]: osdmap e304: 8 total, 8 up, 8 in 2026-03-09T18:34:36.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:35 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3879042302' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:36.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:35 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:36.658 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_read_wait_for_complete_and_cb PASSED [ 78%] 2026-03-09T18:34:36.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:36 vm04 ceph-mon[57581]: pgmap v411: 196 pgs: 6 undersized+degraded+peered+wait, 24 active+undersized+degraded+wait, 2 stale+creating+peering, 21 stale+active+clean, 29 undersized+peered+wait, 76 active+undersized+wait, 38 active+clean; 455 KiB data, 450 MiB used, 160 GiB / 160 GiB avail; 161/597 objects degraded (26.968%) 2026-03-09T18:34:36.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:36 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:34:36.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:36 vm04 ceph-mon[57581]: osdmap e305: 8 total, 8 up, 8 in 2026-03-09T18:34:36.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:36 vm04 ceph-mon[51427]: pgmap v411: 196 pgs: 6 undersized+degraded+peered+wait, 24 active+undersized+degraded+wait, 2 stale+creating+peering, 21 stale+active+clean, 29 undersized+peered+wait, 76 active+undersized+wait, 38 active+clean; 455 KiB data, 450 MiB used, 160 GiB / 160 GiB avail; 161/597 objects degraded (26.968%) 2026-03-09T18:34:36.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:36 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:34:36.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:36 vm04 ceph-mon[51427]: osdmap e305: 8 total, 8 up, 8 in 2026-03-09T18:34:37.108 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:34:36 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug there is no tcmu-runner data available 2026-03-09T18:34:37.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:36 vm09 ceph-mon[54744]: pgmap v411: 196 pgs: 6 undersized+degraded+peered+wait, 24 active+undersized+degraded+wait, 2 stale+creating+peering, 21 stale+active+clean, 29 undersized+peered+wait, 76 active+undersized+wait, 38 active+clean; 455 KiB data, 450 MiB used, 160 GiB / 160 GiB avail; 161/597 objects degraded (26.968%) 2026-03-09T18:34:37.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:36 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:34:37.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:36 vm09 ceph-mon[54744]: osdmap e305: 8 total, 8 up, 8 in 2026-03-09T18:34:38.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:37 vm09 ceph-mon[54744]: osdmap e306: 8 total, 8 up, 8 in 2026-03-09T18:34:38.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:37 vm09 ceph-mon[54744]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:34:38.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:37 vm04 ceph-mon[57581]: osdmap e306: 8 total, 8 up, 8 in 2026-03-09T18:34:38.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:37 vm04 ceph-mon[57581]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:34:38.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:37 vm04 ceph-mon[51427]: osdmap e306: 8 total, 8 up, 8 in 
2026-03-09T18:34:38.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:37 vm04 ceph-mon[51427]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:34:39.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:38 vm09 ceph-mon[54744]: pgmap v414: 164 pgs: 164 active+clean; 455 KiB data, 451 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:34:39.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:38 vm09 ceph-mon[54744]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:34:39.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:38 vm09 ceph-mon[54744]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 2 pgs inactive) 2026-03-09T18:34:39.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:38 vm09 ceph-mon[54744]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 161/597 objects degraded (26.968%), 30 pgs degraded) 2026-03-09T18:34:39.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:38 vm09 ceph-mon[54744]: osdmap e307: 8 total, 8 up, 8 in 2026-03-09T18:34:39.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:38 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/677336016' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "bar", "format": "json"}]: dispatch 2026-03-09T18:34:39.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:38 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/677336016' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T18:34:39.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:38 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T18:34:39.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:38 vm04 ceph-mon[57581]: pgmap v414: 164 pgs: 164 active+clean; 455 KiB data, 451 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:34:39.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:38 vm04 ceph-mon[57581]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:34:39.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:38 vm04 ceph-mon[57581]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 2 pgs inactive) 2026-03-09T18:34:39.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:38 vm04 ceph-mon[57581]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 161/597 objects degraded (26.968%), 30 pgs degraded) 2026-03-09T18:34:39.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:38 vm04 ceph-mon[57581]: osdmap e307: 8 total, 8 up, 8 in 2026-03-09T18:34:39.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:38 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/677336016' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "bar", "format": "json"}]: dispatch 2026-03-09T18:34:39.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:38 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/677336016' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T18:34:39.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:38 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T18:34:39.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:38 vm04 ceph-mon[51427]: pgmap v414: 164 pgs: 164 active+clean; 455 KiB data, 451 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:34:39.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:38 vm04 ceph-mon[51427]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:34:39.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:38 vm04 ceph-mon[51427]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 2 pgs inactive) 2026-03-09T18:34:39.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:38 vm04 ceph-mon[51427]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 161/597 objects degraded (26.968%), 30 pgs degraded) 2026-03-09T18:34:39.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:38 vm04 ceph-mon[51427]: osdmap e307: 8 total, 8 up, 8 in 2026-03-09T18:34:39.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:38 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/677336016' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "bar", "format": "json"}]: dispatch 2026-03-09T18:34:39.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:38 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/677336016' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T18:34:39.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:38 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T18:34:39.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:38 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mon-a[51423]: 2026-03-09T18:34:38.762+0000 7fb2a62da640 -1 mon.a@0(leader).osd e308 definitely_dead 0 2026-03-09T18:34:39.217 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:34:38 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ::ffff:192.168.123.109 - - [09/Mar/2026:18:34:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:34:40.033 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:39 vm09 ceph-mon[54744]: Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-09T18:34:40.033 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:39 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-09T18:34:40.033 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:39 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/677336016' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["1", "7", "2"]}]: dispatch 2026-03-09T18:34:40.033 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:39 vm09 ceph-mon[54744]: osdmap e308: 8 total, 8 up, 8 in 2026-03-09T18:34:40.033 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:39 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["1", "7", "2"]}]: dispatch 2026-03-09T18:34:40.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:39 vm04 ceph-mon[57581]: Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-09T18:34:40.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:39 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-09T18:34:40.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:39 vm04 ceph-mon[57581]: from='client.? 
192.168.123.104:0/677336016' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["1", "7", "2"]}]: dispatch 2026-03-09T18:34:40.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:39 vm04 ceph-mon[57581]: osdmap e308: 8 total, 8 up, 8 in 2026-03-09T18:34:40.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:39 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["1", "7", "2"]}]: dispatch 2026-03-09T18:34:40.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:39 vm04 ceph-mon[51427]: Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-09T18:34:40.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:39 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-09T18:34:40.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:39 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/677336016' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["1", "7", "2"]}]: dispatch 2026-03-09T18:34:40.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:39 vm04 ceph-mon[51427]: osdmap e308: 8 total, 8 up, 8 in 2026-03-09T18:34:40.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:39 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["1", "7", "2"]}]: dispatch 2026-03-09T18:34:40.358 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:34:40 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:34:40.032+0000 7f49b8206640 -1 osd.7 309 osdmap NOUP flag is set, waiting for it to clear 2026-03-09T18:34:40.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:40 vm04 ceph-mon[57581]: pgmap v417: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 451 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:34:40.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:40 vm04 ceph-mon[57581]: Health check failed: 3 osds down (OSD_DOWN) 2026-03-09T18:34:40.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:40 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["1", "7", "2"]}]': finished 2026-03-09T18:34:40.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:40 vm04 ceph-mon[57581]: osdmap e309: 8 total, 5 up, 8 in 2026-03-09T18:34:40.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:40 vm04 ceph-mon[57581]: osd.7 marked itself dead as of e309 2026-03-09T18:34:40.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:40 vm04 ceph-mon[57581]: osd.2 marked itself dead as of e309 2026-03-09T18:34:40.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:40 vm04 ceph-mon[57581]: osd.1 marked itself dead as of e309 2026-03-09T18:34:40.967 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 18:34:40 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-2[71119]: 2026-03-09T18:34:40.747+0000 7f12bae41640 -1 osd.2 309 osdmap NOUP flag is set, waiting for it to clear 2026-03-09T18:34:40.967 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 18:34:40 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-2[71119]: 2026-03-09T18:34:40.757+0000 7f12ada1d640 -1 osd.2 310 osdmap NOUP flag is set, waiting for it to clear 2026-03-09T18:34:40.967 
INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 18:34:40 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-1[65871]: 2026-03-09T18:34:40.661+0000 7f43bff4f640 -1 osd.1 309 osdmap NOUP flag is set, waiting for it to clear 2026-03-09T18:34:40.967 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 18:34:40 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-1[65871]: 2026-03-09T18:34:40.777+0000 7f43b332c640 -1 osd.1 310 osdmap NOUP flag is set, waiting for it to clear 2026-03-09T18:34:40.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:40 vm04 ceph-mon[51427]: pgmap v417: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 451 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:34:40.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:40 vm04 ceph-mon[51427]: Health check failed: 3 osds down (OSD_DOWN) 2026-03-09T18:34:40.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:40 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["1", "7", "2"]}]': finished 2026-03-09T18:34:40.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:40 vm04 ceph-mon[51427]: osdmap e309: 8 total, 5 up, 8 in 2026-03-09T18:34:40.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:40 vm04 ceph-mon[51427]: osd.7 marked itself dead as of e309 2026-03-09T18:34:40.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:40 vm04 ceph-mon[51427]: osd.2 marked itself dead as of e309 2026-03-09T18:34:40.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:40 vm04 ceph-mon[51427]: osd.1 marked itself dead as of e309 2026-03-09T18:34:41.108 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:34:40 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:34:40.770+0000 7f49ab5e3640 -1 osd.7 310 osdmap NOUP flag is set, waiting for it to clear 2026-03-09T18:34:41.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:40 vm09 ceph-mon[54744]: pgmap v417: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 451 
MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:34:41.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:40 vm09 ceph-mon[54744]: Health check failed: 3 osds down (OSD_DOWN) 2026-03-09T18:34:41.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:40 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["1", "7", "2"]}]': finished 2026-03-09T18:34:41.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:40 vm09 ceph-mon[54744]: osdmap e309: 8 total, 5 up, 8 in 2026-03-09T18:34:41.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:40 vm09 ceph-mon[54744]: osd.7 marked itself dead as of e309 2026-03-09T18:34:41.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:40 vm09 ceph-mon[54744]: osd.2 marked itself dead as of e309 2026-03-09T18:34:41.109 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:40 vm09 ceph-mon[54744]: osd.1 marked itself dead as of e309 2026-03-09T18:34:42.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:41 vm09 ceph-mon[54744]: Monitor daemon marked osd.7 down, but it is still running 2026-03-09T18:34:42.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:41 vm09 ceph-mon[54744]: map e309 wrongly marked me down at e309 2026-03-09T18:34:42.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:41 vm09 ceph-mon[54744]: Monitor daemon marked osd.2 down, but it is still running 2026-03-09T18:34:42.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:41 vm09 ceph-mon[54744]: map e309 wrongly marked me down at e309 2026-03-09T18:34:42.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:41 vm09 ceph-mon[54744]: Monitor daemon marked osd.1 down, but it is still running 2026-03-09T18:34:42.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:41 vm09 ceph-mon[54744]: map e309 wrongly marked me down at e309 2026-03-09T18:34:42.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:41 vm09 ceph-mon[54744]: osdmap e310: 8 total, 5 up, 8 in 
2026-03-09T18:34:42.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:41 vm04 ceph-mon[57581]: Monitor daemon marked osd.7 down, but it is still running 2026-03-09T18:34:42.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:41 vm04 ceph-mon[57581]: map e309 wrongly marked me down at e309 2026-03-09T18:34:42.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:41 vm04 ceph-mon[57581]: Monitor daemon marked osd.2 down, but it is still running 2026-03-09T18:34:42.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:41 vm04 ceph-mon[57581]: map e309 wrongly marked me down at e309 2026-03-09T18:34:42.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:41 vm04 ceph-mon[57581]: Monitor daemon marked osd.1 down, but it is still running 2026-03-09T18:34:42.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:41 vm04 ceph-mon[57581]: map e309 wrongly marked me down at e309 2026-03-09T18:34:42.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:41 vm04 ceph-mon[57581]: osdmap e310: 8 total, 5 up, 8 in 2026-03-09T18:34:42.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:41 vm04 ceph-mon[51427]: Monitor daemon marked osd.7 down, but it is still running 2026-03-09T18:34:42.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:41 vm04 ceph-mon[51427]: map e309 wrongly marked me down at e309 2026-03-09T18:34:42.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:41 vm04 ceph-mon[51427]: Monitor daemon marked osd.2 down, but it is still running 2026-03-09T18:34:42.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:41 vm04 ceph-mon[51427]: map e309 wrongly marked me down at e309 2026-03-09T18:34:42.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:41 vm04 ceph-mon[51427]: Monitor daemon marked osd.1 down, but it is still running 2026-03-09T18:34:42.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:41 vm04 ceph-mon[51427]: map e309 wrongly marked me down at e309 2026-03-09T18:34:42.217 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:41 vm04 ceph-mon[51427]: osdmap e310: 8 total, 5 up, 8 in 2026-03-09T18:34:43.108 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:34:42 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:34:42.826+0000 7f49b381d640 -1 osd.7 311 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T18:34:43.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:42 vm09 ceph-mon[54744]: pgmap v420: 196 pgs: 9 undersized+peered, 13 active+undersized, 42 stale+active+clean, 27 unknown, 4 undersized+degraded+peered, 11 active+undersized+degraded, 90 active+clean; 455 KiB data, 451 MiB used, 160 GiB / 160 GiB avail; 85/597 objects degraded (14.238%) 2026-03-09T18:34:43.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:42 vm09 ceph-mon[54744]: Health check failed: Degraded data redundancy: 85/597 objects degraded (14.238%), 15 pgs degraded (PG_DEGRADED) 2026-03-09T18:34:43.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:42 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/677336016' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:43.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:42 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:43.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:42 vm09 ceph-mon[54744]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:34:43.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:42 vm04 ceph-mon[57581]: pgmap v420: 196 pgs: 9 undersized+peered, 13 active+undersized, 42 stale+active+clean, 27 unknown, 4 undersized+degraded+peered, 11 active+undersized+degraded, 90 active+clean; 455 KiB data, 451 MiB used, 160 GiB / 160 GiB avail; 85/597 objects degraded (14.238%) 2026-03-09T18:34:43.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:42 vm04 ceph-mon[57581]: Health check failed: Degraded data redundancy: 85/597 objects degraded (14.238%), 15 pgs degraded (PG_DEGRADED) 2026-03-09T18:34:43.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:42 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/677336016' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:43.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:42 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:43.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:42 vm04 ceph-mon[57581]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:34:43.217 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 18:34:42 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-2[71119]: 2026-03-09T18:34:42.825+0000 7f12b5c57640 -1 osd.2 311 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T18:34:43.217 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 18:34:42 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-1[65871]: 2026-03-09T18:34:42.827+0000 7f43bb566640 -1 osd.1 311 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T18:34:43.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:42 vm04 ceph-mon[51427]: pgmap v420: 196 pgs: 9 undersized+peered, 13 active+undersized, 42 stale+active+clean, 27 unknown, 4 undersized+degraded+peered, 11 active+undersized+degraded, 90 active+clean; 455 KiB data, 451 MiB used, 160 GiB / 160 GiB avail; 85/597 objects degraded (14.238%) 2026-03-09T18:34:43.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:42 vm04 ceph-mon[51427]: Health check failed: Degraded data redundancy: 85/597 objects degraded (14.238%), 15 pgs degraded (PG_DEGRADED) 2026-03-09T18:34:43.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:42 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/677336016' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:43.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:42 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:43.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:42 vm04 ceph-mon[51427]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:34:44.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:43 vm04 ceph-mon[57581]: Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-09T18:34:44.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:43 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:34:44.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:43 vm04 ceph-mon[57581]: osdmap e311: 8 total, 5 up, 8 in 2026-03-09T18:34:44.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:43 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:34:44.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:43 vm04 ceph-mon[57581]: Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-09T18:34:44.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:43 vm04 ceph-mon[51427]: Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-09T18:34:44.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:43 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:34:44.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:43 vm04 ceph-mon[51427]: osdmap e311: 8 total, 5 up, 8 in 2026-03-09T18:34:44.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:43 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:34:44.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:43 vm04 ceph-mon[51427]: Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-09T18:34:44.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:43 vm09 ceph-mon[54744]: Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-09T18:34:44.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:43 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:34:44.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:43 vm09 ceph-mon[54744]: osdmap e311: 8 total, 5 up, 8 in 2026-03-09T18:34:44.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:43 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:34:44.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:43 vm09 ceph-mon[54744]: Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-09T18:34:45.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:44 vm04 ceph-mon[57581]: pgmap v422: 196 pgs: 38 undersized+peered, 80 active+undersized, 1 stale+active+clean, 3 unknown, 11 undersized+degraded+peered, 33 active+undersized+degraded, 30 active+clean; 455 KiB data, 451 MiB used, 160 GiB / 160 GiB avail; 230/597 objects degraded (38.526%) 2026-03-09T18:34:45.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:44 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": 
"osd metadata", "id": 1}]: dispatch 2026-03-09T18:34:45.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:44 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:34:45.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:44 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:34:45.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:44 vm04 ceph-mon[57581]: osd.1 [v2:192.168.123.104:6810/3519470547,v1:192.168.123.104:6811/3519470547] boot 2026-03-09T18:34:45.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:44 vm04 ceph-mon[57581]: osd.2 [v2:192.168.123.104:6818/1080091581,v1:192.168.123.104:6819/1080091581] boot 2026-03-09T18:34:45.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:44 vm04 ceph-mon[57581]: osd.7 [v2:192.168.123.109:6824/3755915520,v1:192.168.123.109:6825/3755915520] boot 2026-03-09T18:34:45.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:44 vm04 ceph-mon[57581]: osdmap e312: 8 total, 8 up, 8 in 2026-03-09T18:34:45.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:44 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:34:45.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:44 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:34:45.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:44 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:34:45.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:44 vm04 ceph-mon[57581]: osdmap e313: 8 total, 8 up, 8 in 2026-03-09T18:34:45.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:44 vm04 ceph-mon[51427]: pgmap v422: 196 pgs: 38 
undersized+peered, 80 active+undersized, 1 stale+active+clean, 3 unknown, 11 undersized+degraded+peered, 33 active+undersized+degraded, 30 active+clean; 455 KiB data, 451 MiB used, 160 GiB / 160 GiB avail; 230/597 objects degraded (38.526%) 2026-03-09T18:34:45.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:44 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:34:45.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:44 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:34:45.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:44 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:34:45.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:44 vm04 ceph-mon[51427]: osd.1 [v2:192.168.123.104:6810/3519470547,v1:192.168.123.104:6811/3519470547] boot 2026-03-09T18:34:45.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:44 vm04 ceph-mon[51427]: osd.2 [v2:192.168.123.104:6818/1080091581,v1:192.168.123.104:6819/1080091581] boot 2026-03-09T18:34:45.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:44 vm04 ceph-mon[51427]: osd.7 [v2:192.168.123.109:6824/3755915520,v1:192.168.123.109:6825/3755915520] boot 2026-03-09T18:34:45.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:44 vm04 ceph-mon[51427]: osdmap e312: 8 total, 8 up, 8 in 2026-03-09T18:34:45.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:44 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:34:45.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:44 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": 
"client.admin"}]: dispatch 2026-03-09T18:34:45.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:44 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:34:45.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:44 vm04 ceph-mon[51427]: osdmap e313: 8 total, 8 up, 8 in 2026-03-09T18:34:45.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:44 vm09 ceph-mon[54744]: pgmap v422: 196 pgs: 38 undersized+peered, 80 active+undersized, 1 stale+active+clean, 3 unknown, 11 undersized+degraded+peered, 33 active+undersized+degraded, 30 active+clean; 455 KiB data, 451 MiB used, 160 GiB / 160 GiB avail; 230/597 objects degraded (38.526%) 2026-03-09T18:34:45.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:44 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:34:45.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:44 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:34:45.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:44 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:34:45.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:44 vm09 ceph-mon[54744]: osd.1 [v2:192.168.123.104:6810/3519470547,v1:192.168.123.104:6811/3519470547] boot 2026-03-09T18:34:45.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:44 vm09 ceph-mon[54744]: osd.2 [v2:192.168.123.104:6818/1080091581,v1:192.168.123.104:6819/1080091581] boot 2026-03-09T18:34:45.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:44 vm09 ceph-mon[54744]: osd.7 [v2:192.168.123.109:6824/3755915520,v1:192.168.123.109:6825/3755915520] boot 2026-03-09T18:34:45.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:44 vm09 ceph-mon[54744]: osdmap e312: 8 total, 8 up, 8 in 
2026-03-09T18:34:45.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:44 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:34:45.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:44 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:34:45.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:44 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:34:45.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:44 vm09 ceph-mon[54744]: osdmap e313: 8 total, 8 up, 8 in 2026-03-09T18:34:46.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:46 vm04 ceph-mon[57581]: pgmap v425: 196 pgs: 38 undersized+peered, 80 active+undersized, 1 stale+active+clean, 3 unknown, 11 undersized+degraded+peered, 33 active+undersized+degraded, 30 active+clean; 455 KiB data, 451 MiB used, 160 GiB / 160 GiB avail; 230/597 objects degraded (38.526%) 2026-03-09T18:34:46.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:46 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:34:46.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:46 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:34:46.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:46 vm04 ceph-mon[51427]: pgmap v425: 196 pgs: 38 undersized+peered, 80 active+undersized, 1 stale+active+clean, 3 unknown, 11 undersized+degraded+peered, 33 active+undersized+degraded, 30 active+clean; 455 KiB data, 451 MiB used, 160 GiB / 160 GiB avail; 230/597 objects degraded (38.526%) 2026-03-09T18:34:46.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:46 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:34:46.967 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:46 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:34:47.108 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:34:46 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug there is no tcmu-runner data available 2026-03-09T18:34:47.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:46 vm09 ceph-mon[54744]: pgmap v425: 196 pgs: 38 undersized+peered, 80 active+undersized, 1 stale+active+clean, 3 unknown, 11 undersized+degraded+peered, 33 active+undersized+degraded, 30 active+clean; 455 KiB data, 451 MiB used, 160 GiB / 160 GiB avail; 230/597 objects degraded (38.526%) 2026-03-09T18:34:47.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:46 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:34:47.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:46 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:34:47.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:47 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/677336016' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:47.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:47 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:47.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:47 vm04 ceph-mon[57581]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:34:47.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:47 vm04 ceph-mon[51427]: from='client.? 
192.168.123.104:0/677336016' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:47.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:47 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:47.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:47 vm04 ceph-mon[51427]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:34:48.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:47 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/677336016' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:48.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:47 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:48.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:47 vm09 ceph-mon[54744]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:34:48.718 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_read_wait_for_complete_and_cb_error PASSED [ 79%] 2026-03-09T18:34:48.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:48 vm04 ceph-mon[57581]: pgmap v426: 196 pgs: 196 active+clean; 455 KiB data, 452 MiB used, 160 GiB / 160 GiB avail; 682 B/s rd, 0 op/s 2026-03-09T18:34:48.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:48 vm04 ceph-mon[57581]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 230/597 objects degraded (38.526%), 44 pgs degraded) 2026-03-09T18:34:48.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:48 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:34:48.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:48 vm04 ceph-mon[57581]: osdmap e314: 8 total, 8 up, 8 in 2026-03-09T18:34:48.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:48 vm04 ceph-mon[51427]: pgmap v426: 196 pgs: 196 active+clean; 455 KiB data, 452 MiB used, 160 GiB / 160 GiB avail; 682 B/s rd, 0 op/s 2026-03-09T18:34:48.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:48 vm04 ceph-mon[51427]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 230/597 objects degraded (38.526%), 44 pgs degraded) 2026-03-09T18:34:48.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:48 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:34:48.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:48 vm04 ceph-mon[51427]: osdmap e314: 8 total, 8 up, 8 in 2026-03-09T18:34:48.967 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:34:48 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ::ffff:192.168.123.109 - - [09/Mar/2026:18:34:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:34:49.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:48 vm09 ceph-mon[54744]: pgmap v426: 196 pgs: 196 active+clean; 455 KiB data, 452 MiB used, 160 GiB / 160 GiB avail; 682 B/s rd, 0 op/s 2026-03-09T18:34:49.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:48 vm09 ceph-mon[54744]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 230/597 objects degraded (38.526%), 44 pgs degraded) 2026-03-09T18:34:49.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:48 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:34:49.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:48 vm09 ceph-mon[54744]: osdmap e314: 8 total, 8 up, 8 in 2026-03-09T18:34:50.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:49 vm09 ceph-mon[54744]: osdmap e315: 8 total, 8 up, 8 in 2026-03-09T18:34:50.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:49 vm04 ceph-mon[51427]: osdmap e315: 8 total, 8 up, 8 in 2026-03-09T18:34:50.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:49 vm04 ceph-mon[57581]: osdmap e315: 8 total, 8 up, 8 in 2026-03-09T18:34:51.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:50 vm09 ceph-mon[54744]: pgmap v429: 164 pgs: 164 active+clean; 455 KiB data, 452 MiB used, 160 GiB / 160 GiB avail; 1.8 KiB/s rd, 1 op/s 2026-03-09T18:34:51.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:50 vm09 ceph-mon[54744]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:34:51.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:50 vm09 ceph-mon[54744]: osdmap e316: 8 total, 8 up, 8 in 2026-03-09T18:34:51.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:50 vm04 ceph-mon[57581]: pgmap v429: 164 pgs: 164 active+clean; 455 KiB data, 452 MiB used, 160 GiB / 160 GiB avail; 1.8 KiB/s rd, 1 op/s 2026-03-09T18:34:51.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:50 vm04 ceph-mon[57581]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:34:51.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:50 vm04 ceph-mon[57581]: osdmap e316: 8 total, 8 up, 8 in 2026-03-09T18:34:51.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:50 vm04 ceph-mon[51427]: pgmap v429: 164 pgs: 164 active+clean; 455 KiB data, 452 MiB used, 160 GiB / 160 GiB avail; 1.8 KiB/s rd, 1 op/s 2026-03-09T18:34:51.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:50 vm04 
ceph-mon[51427]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:34:51.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:50 vm04 ceph-mon[51427]: osdmap e316: 8 total, 8 up, 8 in 2026-03-09T18:34:52.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:51 vm09 ceph-mon[54744]: osdmap e317: 8 total, 8 up, 8 in 2026-03-09T18:34:52.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:51 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3471992955' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:52.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:51 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:52.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:51 vm04 ceph-mon[57581]: osdmap e317: 8 total, 8 up, 8 in 2026-03-09T18:34:52.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:51 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3471992955' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:52.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:51 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:52.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:51 vm04 ceph-mon[51427]: osdmap e317: 8 total, 8 up, 8 in 2026-03-09T18:34:52.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:51 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3471992955' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:52.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:51 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:52.764 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_lock PASSED [ 80%] 2026-03-09T18:34:53.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:52 vm09 ceph-mon[54744]: pgmap v432: 196 pgs: 23 unknown, 173 active+clean; 455 KiB data, 452 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:34:53.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:52 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:34:53.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:52 vm09 ceph-mon[54744]: osdmap e318: 8 total, 8 up, 8 in 2026-03-09T18:34:53.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:52 vm04 ceph-mon[57581]: pgmap v432: 196 pgs: 23 unknown, 173 active+clean; 455 KiB data, 452 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:34:53.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:52 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:34:53.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:52 vm04 ceph-mon[57581]: osdmap e318: 8 total, 8 up, 8 in 2026-03-09T18:34:53.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:52 vm04 ceph-mon[51427]: pgmap v432: 196 pgs: 23 unknown, 173 active+clean; 455 KiB data, 452 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:34:53.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:52 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:34:53.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:52 vm04 ceph-mon[51427]: osdmap e318: 8 total, 8 up, 8 in 2026-03-09T18:34:54.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:53 vm09 ceph-mon[54744]: osdmap e319: 8 total, 8 up, 8 in 2026-03-09T18:34:54.118 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:53 vm04 ceph-mon[57581]: osdmap e319: 8 total, 8 up, 8 in 2026-03-09T18:34:54.118 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:53 vm04 ceph-mon[51427]: osdmap e319: 8 total, 8 up, 8 in 2026-03-09T18:34:55.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:54 vm09 ceph-mon[54744]: pgmap v435: 164 pgs: 164 active+clean; 455 KiB data, 452 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:34:55.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:54 vm09 ceph-mon[54744]: osdmap e320: 8 total, 8 up, 8 in 2026-03-09T18:34:55.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:54 vm04 ceph-mon[57581]: pgmap v435: 164 pgs: 164 active+clean; 455 KiB data, 452 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:34:55.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:54 vm04 ceph-mon[57581]: osdmap e320: 8 total, 8 up, 8 in 2026-03-09T18:34:55.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:54 vm04 ceph-mon[51427]: pgmap v435: 164 pgs: 164 active+clean; 455 KiB data, 452 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:34:55.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:54 vm04 ceph-mon[51427]: osdmap e320: 8 total, 8 up, 8 in 2026-03-09T18:34:56.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:55 vm09 ceph-mon[54744]: osdmap e321: 8 total, 8 up, 8 in 2026-03-09T18:34:56.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:55 vm09 ceph-mon[54744]: from='client.? 
192.168.123.104:0/2787934999' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:56.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:55 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:56.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:55 vm04 ceph-mon[57581]: osdmap e321: 8 total, 8 up, 8 in 2026-03-09T18:34:56.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:55 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/2787934999' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:56.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:55 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:56.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:55 vm04 ceph-mon[51427]: osdmap e321: 8 total, 8 up, 8 in 2026-03-09T18:34:56.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:55 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/2787934999' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:56.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:55 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:34:56.850 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_execute PASSED [ 81%] 2026-03-09T18:34:57.108 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:34:56 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug there is no tcmu-runner data available 2026-03-09T18:34:57.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:56 vm09 ceph-mon[54744]: pgmap v438: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 452 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:34:57.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:56 vm09 ceph-mon[54744]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:34:57.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:56 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:34:57.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:56 vm09 ceph-mon[54744]: osdmap e322: 8 total, 8 up, 8 in 2026-03-09T18:34:57.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:56 vm04 ceph-mon[57581]: pgmap v438: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 452 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:34:57.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:56 vm04 ceph-mon[57581]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:34:57.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:56 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:34:57.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:56 vm04 ceph-mon[57581]: osdmap e322: 8 total, 8 up, 8 in 2026-03-09T18:34:57.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:56 vm04 ceph-mon[51427]: pgmap v438: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 452 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:34:57.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:56 vm04 ceph-mon[51427]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:34:57.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:56 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:34:57.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:56 vm04 ceph-mon[51427]: osdmap e322: 8 total, 8 up, 8 in 2026-03-09T18:34:58.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:57 vm04 ceph-mon[57581]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:34:58.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:57 vm04 ceph-mon[57581]: osdmap e323: 8 total, 8 up, 8 in 2026-03-09T18:34:58.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:57 vm04 ceph-mon[51427]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:34:58.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:57 vm04 ceph-mon[51427]: osdmap e323: 8 total, 8 up, 8 in 2026-03-09T18:34:58.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:57 vm09 ceph-mon[54744]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:34:58.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:57 vm09 ceph-mon[54744]: osdmap e323: 8 
total, 8 up, 8 in 2026-03-09T18:34:59.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:58 vm04 ceph-mon[57581]: pgmap v441: 164 pgs: 164 active+clean; 455 KiB data, 461 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:34:59.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:58 vm04 ceph-mon[57581]: osdmap e324: 8 total, 8 up, 8 in 2026-03-09T18:34:59.217 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:34:58 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ::ffff:192.168.123.109 - - [09/Mar/2026:18:34:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:34:59.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:58 vm04 ceph-mon[51427]: pgmap v441: 164 pgs: 164 active+clean; 455 KiB data, 461 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:34:59.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:58 vm04 ceph-mon[51427]: osdmap e324: 8 total, 8 up, 8 in 2026-03-09T18:34:59.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:58 vm09 ceph-mon[54744]: pgmap v441: 164 pgs: 164 active+clean; 455 KiB data, 461 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:34:59.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:58 vm09 ceph-mon[54744]: osdmap e324: 8 total, 8 up, 8 in 2026-03-09T18:35:00.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:59 vm04 ceph-mon[57581]: osdmap e325: 8 total, 8 up, 8 in 2026-03-09T18:35:00.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:59 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3665363603' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:35:00.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:59 vm04 ceph-mon[57581]: from='client.? 
192.168.123.104:0/3665363603' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:35:00.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:34:59 vm04 ceph-mon[57581]: osdmap e326: 8 total, 8 up, 8 in 2026-03-09T18:35:00.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:59 vm04 ceph-mon[51427]: osdmap e325: 8 total, 8 up, 8 in 2026-03-09T18:35:00.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:59 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3665363603' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:35:00.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:59 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3665363603' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:35:00.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:34:59 vm04 ceph-mon[51427]: osdmap e326: 8 total, 8 up, 8 in 2026-03-09T18:35:00.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:59 vm09 ceph-mon[54744]: osdmap e325: 8 total, 8 up, 8 in 2026-03-09T18:35:00.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:59 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3665363603' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:35:00.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:59 vm09 ceph-mon[54744]: from='client.? 
192.168.123.104:0/3665363603' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:35:00.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:34:59 vm09 ceph-mon[54744]: osdmap e326: 8 total, 8 up, 8 in 2026-03-09T18:35:00.884 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_execute PASSED [ 82%] 2026-03-09T18:35:01.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:00 vm04 ceph-mon[57581]: pgmap v444: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 461 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:35:01.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:00 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:35:01.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:00 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:35:01.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:00 vm04 ceph-mon[57581]: osdmap e327: 8 total, 8 up, 8 in 2026-03-09T18:35:01.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:00 vm04 ceph-mon[51427]: pgmap v444: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 461 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:35:01.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:00 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y' 2026-03-09T18:35:01.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:00 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:35:01.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:00 vm04 ceph-mon[51427]: osdmap e327: 8 total, 8 up, 8 in 2026-03-09T18:35:01.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:00 vm09 ceph-mon[54744]: pgmap v444: 196 pgs: 32 unknown, 164 active+clean; 
455 KiB data, 461 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:01.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:00 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y'
2026-03-09T18:35:01.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:00 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:35:01.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:00 vm09 ceph-mon[54744]: osdmap e327: 8 total, 8 up, 8 in
2026-03-09T18:35:03.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:02 vm04 ceph-mon[57581]: pgmap v447: 164 pgs: 164 active+clean; 455 KiB data, 461 MiB used, 160 GiB / 160 GiB avail
2026-03-09T18:35:03.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:02 vm04 ceph-mon[57581]: osdmap e328: 8 total, 8 up, 8 in
2026-03-09T18:35:03.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:02 vm04 ceph-mon[57581]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T18:35:03.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:02 vm04 ceph-mon[51427]: pgmap v447: 164 pgs: 164 active+clean; 455 KiB data, 461 MiB used, 160 GiB / 160 GiB avail
2026-03-09T18:35:03.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:02 vm04 ceph-mon[51427]: osdmap e328: 8 total, 8 up, 8 in
2026-03-09T18:35:03.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:02 vm04 ceph-mon[51427]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T18:35:03.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:02 vm09 ceph-mon[54744]: pgmap v447: 164 pgs: 164 active+clean; 455 KiB data, 461 MiB used, 160 GiB / 160 GiB avail
2026-03-09T18:35:03.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:02 vm09 ceph-mon[54744]: osdmap e328: 8 total, 8 up, 8 in
2026-03-09T18:35:03.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:02 vm09 ceph-mon[54744]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T18:35:04.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:03 vm04 ceph-mon[57581]: osdmap e329: 8 total, 8 up, 8 in
2026-03-09T18:35:04.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:03 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/2978700860' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T18:35:04.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:03 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T18:35:04.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:03 vm04 ceph-mon[51427]: osdmap e329: 8 total, 8 up, 8 in
2026-03-09T18:35:04.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:03 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/2978700860' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T18:35:04.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:03 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T18:35:04.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:03 vm09 ceph-mon[54744]: osdmap e329: 8 total, 8 up, 8 in
2026-03-09T18:35:04.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:03 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/2978700860' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T18:35:04.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:03 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T18:35:04.926 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_setxattr PASSED [ 83%]
2026-03-09T18:35:05.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:04 vm04 ceph-mon[57581]: pgmap v450: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 466 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:05.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:04 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T18:35:05.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:04 vm04 ceph-mon[57581]: osdmap e330: 8 total, 8 up, 8 in
2026-03-09T18:35:05.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:04 vm04 ceph-mon[51427]: pgmap v450: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 466 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:05.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:04 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T18:35:05.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:04 vm04 ceph-mon[51427]: osdmap e330: 8 total, 8 up, 8 in
2026-03-09T18:35:05.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:04 vm09 ceph-mon[54744]: pgmap v450: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 466 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:05.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:04 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T18:35:05.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:04 vm09 ceph-mon[54744]: osdmap e330: 8 total, 8 up, 8 in
2026-03-09T18:35:06.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:05 vm04 ceph-mon[57581]: osdmap e331: 8 total, 8 up, 8 in
2026-03-09T18:35:06.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:05 vm04 ceph-mon[51427]: osdmap e331: 8 total, 8 up, 8 in
2026-03-09T18:35:06.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:05 vm09 ceph-mon[54744]: osdmap e331: 8 total, 8 up, 8 in
2026-03-09T18:35:07.108 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:35:06 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug there is no tcmu-runner data available
2026-03-09T18:35:07.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:06 vm09 ceph-mon[54744]: pgmap v453: 164 pgs: 164 active+clean; 455 KiB data, 466 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:07.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:06 vm09 ceph-mon[54744]: osdmap e332: 8 total, 8 up, 8 in
2026-03-09T18:35:07.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:06 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/188267020' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T18:35:07.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:06 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/188267020' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]: dispatch
2026-03-09T18:35:07.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:06 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]: dispatch
2026-03-09T18:35:07.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:06 vm04 ceph-mon[57581]: pgmap v453: 164 pgs: 164 active+clean; 455 KiB data, 466 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:07.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:06 vm04 ceph-mon[57581]: osdmap e332: 8 total, 8 up, 8 in
2026-03-09T18:35:07.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:06 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/188267020' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T18:35:07.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:06 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/188267020' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]: dispatch
2026-03-09T18:35:07.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:06 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]: dispatch
2026-03-09T18:35:07.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:06 vm04 ceph-mon[51427]: pgmap v453: 164 pgs: 164 active+clean; 455 KiB data, 466 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:07.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:06 vm04 ceph-mon[51427]: osdmap e332: 8 total, 8 up, 8 in
2026-03-09T18:35:07.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:06 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/188267020' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T18:35:07.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:06 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/188267020' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]: dispatch
2026-03-09T18:35:07.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:06 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]: dispatch
2026-03-09T18:35:08.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:08 vm04 ceph-mon[57581]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:35:08.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:08 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]': finished
2026-03-09T18:35:08.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:08 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/188267020' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2"}]: dispatch
2026-03-09T18:35:08.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:08 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/188267020' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]: dispatch
2026-03-09T18:35:08.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:08 vm04 ceph-mon[57581]: osdmap e333: 8 total, 8 up, 8 in
2026-03-09T18:35:08.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:08 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]: dispatch
2026-03-09T18:35:08.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:08 vm04 ceph-mon[51427]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:35:08.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:08 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]': finished
2026-03-09T18:35:08.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:08 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/188267020' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2"}]: dispatch
2026-03-09T18:35:08.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:08 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/188267020' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]: dispatch
2026-03-09T18:35:08.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:08 vm04 ceph-mon[51427]: osdmap e333: 8 total, 8 up, 8 in
2026-03-09T18:35:08.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:08 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]: dispatch
2026-03-09T18:35:08.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:08 vm09 ceph-mon[54744]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:35:08.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:08 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]': finished
2026-03-09T18:35:08.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:08 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/188267020' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2"}]: dispatch
2026-03-09T18:35:08.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:08 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/188267020' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]: dispatch
2026-03-09T18:35:08.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:08 vm09 ceph-mon[54744]: osdmap e333: 8 total, 8 up, 8 in
2026-03-09T18:35:08.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:08 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]: dispatch
2026-03-09T18:35:09.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:09 vm04 ceph-mon[57581]: pgmap v456: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 466 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:09.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:09 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]': finished
2026-03-09T18:35:09.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:09 vm04 ceph-mon[57581]: osdmap e334: 8 total, 8 up, 8 in
2026-03-09T18:35:09.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:09 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/188267020' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"dne","key":"key","value":"key"}]: dispatch
2026-03-09T18:35:09.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:09 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/188267020' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]: dispatch
2026-03-09T18:35:09.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:09 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]: dispatch
2026-03-09T18:35:09.217 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:35:08 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ::ffff:192.168.123.109 - - [09/Mar/2026:18:35:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-09T18:35:09.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:09 vm04 ceph-mon[51427]: pgmap v456: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 466 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:09.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:09 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]': finished
2026-03-09T18:35:09.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:09 vm04 ceph-mon[51427]: osdmap e334: 8 total, 8 up, 8 in
2026-03-09T18:35:09.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:09 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/188267020' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"dne","key":"key","value":"key"}]: dispatch
2026-03-09T18:35:09.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:09 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/188267020' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]: dispatch
2026-03-09T18:35:09.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:09 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]: dispatch
2026-03-09T18:35:09.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:09 vm09 ceph-mon[54744]: pgmap v456: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 466 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:09.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:09 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]': finished
2026-03-09T18:35:09.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:09 vm09 ceph-mon[54744]: osdmap e334: 8 total, 8 up, 8 in
2026-03-09T18:35:09.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:09 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/188267020' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"dne","key":"key","value":"key"}]: dispatch
2026-03-09T18:35:09.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:09 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/188267020' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]: dispatch
2026-03-09T18:35:09.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:09 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]: dispatch
2026-03-09T18:35:10.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:10 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]': finished
2026-03-09T18:35:10.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:10 vm04 ceph-mon[57581]: osdmap e335: 8 total, 8 up, 8 in
2026-03-09T18:35:10.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:10 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/188267020' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]: dispatch
2026-03-09T18:35:10.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:10 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]: dispatch
2026-03-09T18:35:10.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:10 vm04 ceph-mon[57581]: pgmap v459: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 466 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:10.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:10 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]': finished
2026-03-09T18:35:10.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:10 vm04 ceph-mon[51427]: osdmap e335: 8 total, 8 up, 8 in
2026-03-09T18:35:10.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:10 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/188267020' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]: dispatch
2026-03-09T18:35:10.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:10 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]: dispatch
2026-03-09T18:35:10.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:10 vm04 ceph-mon[51427]: pgmap v459: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 466 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:10.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:10 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]': finished
2026-03-09T18:35:10.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:10 vm09 ceph-mon[54744]: osdmap e335: 8 total, 8 up, 8 in
2026-03-09T18:35:10.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:10 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/188267020' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]: dispatch
2026-03-09T18:35:10.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:10 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]: dispatch
2026-03-09T18:35:10.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:10 vm09 ceph-mon[54744]: pgmap v459: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 466 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:11.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:11 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]': finished
2026-03-09T18:35:11.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:11 vm04 ceph-mon[57581]: osdmap e336: 8 total, 8 up, 8 in
2026-03-09T18:35:11.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:11 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/188267020' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]: dispatch
2026-03-09T18:35:11.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:11 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]: dispatch
2026-03-09T18:35:11.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:11 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]': finished
2026-03-09T18:35:11.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:11 vm04 ceph-mon[51427]: osdmap e336: 8 total, 8 up, 8 in
2026-03-09T18:35:11.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:11 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/188267020' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]: dispatch
2026-03-09T18:35:11.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:11 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]: dispatch
2026-03-09T18:35:11.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:11 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]': finished
2026-03-09T18:35:11.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:11 vm09 ceph-mon[54744]: osdmap e336: 8 total, 8 up, 8 in
2026-03-09T18:35:11.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:11 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/188267020' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]: dispatch
2026-03-09T18:35:11.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:11 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]: dispatch
2026-03-09T18:35:12.466 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:12 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]': finished
2026-03-09T18:35:12.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:12 vm04 ceph-mon[57581]: osdmap e337: 8 total, 8 up, 8 in
2026-03-09T18:35:12.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:12 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/188267020' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]: dispatch
2026-03-09T18:35:12.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:12 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]: dispatch
2026-03-09T18:35:12.467 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:12 vm04 ceph-mon[57581]: pgmap v462: 196 pgs: 23 creating+peering, 173 active+clean; 455 KiB data, 466 MiB used, 160 GiB / 160 GiB avail
2026-03-09T18:35:12.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:12 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]': finished
2026-03-09T18:35:12.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:12 vm04 ceph-mon[51427]: osdmap e337: 8 total, 8 up, 8 in
2026-03-09T18:35:12.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:12 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/188267020' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]: dispatch
2026-03-09T18:35:12.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:12 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]: dispatch
2026-03-09T18:35:12.467 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:12 vm04 ceph-mon[51427]: pgmap v462: 196 pgs: 23 creating+peering, 173 active+clean; 455 KiB data, 466 MiB used, 160 GiB / 160 GiB avail
2026-03-09T18:35:12.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:12 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]': finished
2026-03-09T18:35:12.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:12 vm09 ceph-mon[54744]: osdmap e337: 8 total, 8 up, 8 in
2026-03-09T18:35:12.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:12 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/188267020' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]: dispatch
2026-03-09T18:35:12.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:12 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]: dispatch
2026-03-09T18:35:12.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:12 vm09 ceph-mon[54744]: pgmap v462: 196 pgs: 23 creating+peering, 173 active+clean; 455 KiB data, 466 MiB used, 160 GiB / 160 GiB avail
2026-03-09T18:35:13.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:13 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]': finished
2026-03-09T18:35:13.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:13 vm09 ceph-mon[54744]: osdmap e338: 8 total, 8 up, 8 in
2026-03-09T18:35:13.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:13 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/188267020' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T18:35:13.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:13 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T18:35:13.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:13 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]': finished
2026-03-09T18:35:13.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:13 vm04 ceph-mon[57581]: osdmap e338: 8 total, 8 up, 8 in
2026-03-09T18:35:13.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:13 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/188267020' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T18:35:13.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:13 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T18:35:13.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:13 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]': finished
2026-03-09T18:35:13.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:13 vm04 ceph-mon[51427]: osdmap e338: 8 total, 8 up, 8 in
2026-03-09T18:35:13.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:13 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/188267020' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T18:35:13.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:13 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T18:35:14.235 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_applications PASSED [ 84%]
2026-03-09T18:35:14.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:14 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T18:35:14.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:14 vm09 ceph-mon[54744]: osdmap e339: 8 total, 8 up, 8 in
2026-03-09T18:35:14.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:14 vm09 ceph-mon[54744]: pgmap v465: 196 pgs: 196 active+clean; 455 KiB data, 471 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:14.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:14 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T18:35:14.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:14 vm04 ceph-mon[57581]: osdmap e339: 8 total, 8 up, 8 in
2026-03-09T18:35:14.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:14 vm04 ceph-mon[57581]: pgmap v465: 196 pgs: 196 active+clean; 455 KiB data, 471 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:14.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:14 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T18:35:14.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:14 vm04 ceph-mon[51427]: osdmap e339: 8 total, 8 up, 8 in
2026-03-09T18:35:14.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:14 vm04 ceph-mon[51427]: pgmap v465: 196 pgs: 196 active+clean; 455 KiB data, 471 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:15.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:15 vm09 ceph-mon[54744]: osdmap e340: 8 total, 8 up, 8 in
2026-03-09T18:35:15.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:15 vm04 ceph-mon[57581]: osdmap e340: 8 total, 8 up, 8 in
2026-03-09T18:35:15.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:15 vm04 ceph-mon[51427]: osdmap e340: 8 total, 8 up, 8 in
2026-03-09T18:35:16.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:16 vm09 ceph-mon[54744]: osdmap e341: 8 total, 8 up, 8 in
2026-03-09T18:35:16.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:16 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/253746002' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T18:35:16.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:16 vm09 ceph-mon[54744]: pgmap v468: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 471 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:16.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:16 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:35:16.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:16 vm04 ceph-mon[57581]: osdmap e341: 8 total, 8 up, 8 in
2026-03-09T18:35:16.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:16 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/253746002' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T18:35:16.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:16 vm04 ceph-mon[57581]: pgmap v468: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 471 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:16.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:16 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:35:16.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:16 vm04 ceph-mon[51427]: osdmap e341: 8 total, 8 up, 8 in
2026-03-09T18:35:16.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:16 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/253746002' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T18:35:16.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:16 vm04 ceph-mon[51427]: pgmap v468: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 471 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:16.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:16 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:35:17.108 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:35:16 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug there is no tcmu-runner data available
2026-03-09T18:35:17.272 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_service_daemon PASSED [ 85%]
2026-03-09T18:35:17.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:17 vm09 ceph-mon[54744]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T18:35:17.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:17 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/253746002' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T18:35:17.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:17 vm09 ceph-mon[54744]: osdmap e342: 8 total, 8 up, 8 in
2026-03-09T18:35:17.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:17 vm09 ceph-mon[54744]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:35:17.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:17 vm04 ceph-mon[57581]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T18:35:17.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:17 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/253746002' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T18:35:17.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:17 vm04 ceph-mon[57581]: osdmap e342: 8 total, 8 up, 8 in
2026-03-09T18:35:17.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:17 vm04 ceph-mon[57581]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:35:17.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:17 vm04 ceph-mon[51427]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T18:35:17.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:17 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/253746002' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T18:35:17.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:17 vm04 ceph-mon[51427]: osdmap e342: 8 total, 8 up, 8 in
2026-03-09T18:35:17.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:17 vm04 ceph-mon[51427]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:35:18.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:18 vm09 ceph-mon[54744]: osdmap e343: 8 total, 8 up, 8 in
2026-03-09T18:35:18.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:18 vm09 ceph-mon[54744]: pgmap v471: 164 pgs: 164 active+clean; 455 KiB data, 472 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:18.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:18 vm04 ceph-mon[57581]: osdmap e343: 8 total, 8 up, 8 in
2026-03-09T18:35:18.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:18 vm04 ceph-mon[57581]: pgmap v471: 164 pgs: 164 active+clean; 455 KiB data, 472 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:18.716 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:18 vm04 ceph-mon[51427]: osdmap e343: 8 total, 8 up, 8 in
2026-03-09T18:35:18.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:18 vm04 ceph-mon[51427]: pgmap v471: 164 pgs: 164 active+clean; 455 KiB data, 472 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:19.216 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:35:18 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ::ffff:192.168.123.109 - - [09/Mar/2026:18:35:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-09T18:35:19.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:19 vm09 ceph-mon[54744]: osdmap e344: 8 total, 8 up, 8 in
2026-03-09T18:35:19.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:19 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/4108064932' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T18:35:19.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:19 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T18:35:19.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:19 vm04 ceph-mon[57581]: osdmap e344: 8 total, 8 up, 8 in
2026-03-09T18:35:19.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:19 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/4108064932' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T18:35:19.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:19 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T18:35:19.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:19 vm04 ceph-mon[51427]: osdmap e344: 8 total, 8 up, 8 in
2026-03-09T18:35:19.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:19 vm04 ceph-mon[51427]: from='client.? 
192.168.123.104:0/4108064932' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:35:19.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:19 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:35:20.294 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_alignment PASSED [ 86%] 2026-03-09T18:35:20.607 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:20 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:35:20.607 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:20 vm09 ceph-mon[54744]: osdmap e345: 8 total, 8 up, 8 in 2026-03-09T18:35:20.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:20 vm09 ceph-mon[54744]: pgmap v474: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 472 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:35:20.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:20 vm09 ceph-mon[54744]: osdmap e346: 8 total, 8 up, 8 in 2026-03-09T18:35:20.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:20 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:35:20.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:20 vm04 ceph-mon[57581]: osdmap e345: 8 total, 8 up, 8 in 2026-03-09T18:35:20.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:20 vm04 ceph-mon[57581]: pgmap v474: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 472 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:35:20.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:20 vm04 ceph-mon[57581]: osdmap e346: 8 total, 8 up, 8 in 2026-03-09T18:35:20.716 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:20 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:35:20.716 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:20 vm04 ceph-mon[51427]: osdmap e345: 8 total, 8 up, 8 in 2026-03-09T18:35:20.716 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:20 vm04 ceph-mon[51427]: pgmap v474: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 472 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:35:20.716 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:20 vm04 ceph-mon[51427]: osdmap e346: 8 total, 8 up, 8 in 2026-03-09T18:35:21.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:21 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/571599098' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T18:35:21.608 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:21 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T18:35:21.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:21 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/571599098' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T18:35:21.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:21 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T18:35:21.716 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:21 vm04 ceph-mon[51427]: from='client.? 
192.168.123.104:0/571599098' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T18:35:21.716 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:21 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T18:35:22.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:22 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T18:35:22.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:22 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/571599098' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]: dispatch 2026-03-09T18:35:22.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:22 vm04 ceph-mon[57581]: osdmap e347: 8 total, 8 up, 8 in 2026-03-09T18:35:22.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:22 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]: dispatch 2026-03-09T18:35:22.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:22 vm04 ceph-mon[57581]: pgmap v477: 164 pgs: 164 active+clean; 455 KiB data, 476 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:35:22.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:22 vm04 ceph-mon[57581]: osdmap e348: 8 total, 8 up, 8 in 2026-03-09T18:35:22.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:22 vm04 ceph-mon[51427]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T18:35:22.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:22 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/571599098' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]: dispatch 2026-03-09T18:35:22.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:22 vm04 ceph-mon[51427]: osdmap e347: 8 total, 8 up, 8 in 2026-03-09T18:35:22.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:22 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]: dispatch 2026-03-09T18:35:22.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:22 vm04 ceph-mon[51427]: pgmap v477: 164 pgs: 164 active+clean; 455 KiB data, 476 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:35:22.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:22 vm04 ceph-mon[51427]: osdmap e348: 8 total, 8 up, 8 in 2026-03-09T18:35:22.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:22 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T18:35:22.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:22 vm09 ceph-mon[54744]: from='client.? 
192.168.123.104:0/571599098' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]: dispatch 2026-03-09T18:35:22.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:22 vm09 ceph-mon[54744]: osdmap e347: 8 total, 8 up, 8 in 2026-03-09T18:35:22.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:22 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]: dispatch 2026-03-09T18:35:22.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:22 vm09 ceph-mon[54744]: pgmap v477: 164 pgs: 164 active+clean; 455 KiB data, 476 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:35:22.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:22 vm09 ceph-mon[54744]: osdmap e348: 8 total, 8 up, 8 in 2026-03-09T18:35:23.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:23 vm04 ceph-mon[57581]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:35:23.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:23 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]': finished 2026-03-09T18:35:23.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:23 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/571599098' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:35:23.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:23 vm04 ceph-mon[57581]: osdmap e349: 8 total, 8 up, 8 in 2026-03-09T18:35:23.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:23 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:35:23.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:23 vm04 ceph-mon[51427]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:35:23.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:23 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]': finished 2026-03-09T18:35:23.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:23 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/571599098' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:35:23.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:23 vm04 ceph-mon[51427]: osdmap e349: 8 total, 8 up, 8 in 2026-03-09T18:35:23.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:23 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:35:23.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:23 vm09 ceph-mon[54744]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:35:23.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:23 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]': finished 2026-03-09T18:35:23.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:23 vm09 ceph-mon[54744]: from='client.? 
192.168.123.104:0/571599098' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:35:23.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:23 vm09 ceph-mon[54744]: osdmap e349: 8 total, 8 up, 8 in 2026-03-09T18:35:23.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:23 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:35:24.617 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:24 vm04 ceph-mon[57581]: pgmap v480: 172 pgs: 8 unknown, 164 active+clean; 455 KiB data, 476 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:35:24.617 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:24 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:35:24.617 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:24 vm04 ceph-mon[57581]: osdmap e350: 8 total, 8 up, 8 in 2026-03-09T18:35:24.618 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:24 vm04 ceph-mon[51427]: pgmap v480: 172 pgs: 8 unknown, 164 active+clean; 455 KiB data, 476 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:35:24.618 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:24 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:35:24.618 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:24 vm04 ceph-mon[51427]: osdmap e350: 8 total, 8 up, 8 in 2026-03-09T18:35:24.759 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:24 vm09 ceph-mon[54744]: pgmap v480: 172 pgs: 8 unknown, 164 active+clean; 455 KiB data, 476 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:35:24.759 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:24 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:35:24.759 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:24 vm09 ceph-mon[54744]: osdmap e350: 8 total, 8 up, 8 in 2026-03-09T18:35:25.354 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctxEc::test_alignment PASSED [ 87%] 2026-03-09T18:35:26.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:26 vm04 ceph-mon[57581]: osdmap e351: 8 total, 8 up, 8 in 2026-03-09T18:35:26.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:26 vm04 ceph-mon[57581]: pgmap v483: 164 pgs: 164 active+clean; 455 KiB data, 476 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:35:26.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:26 vm04 ceph-mon[51427]: osdmap e351: 8 total, 8 up, 8 in 2026-03-09T18:35:26.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:26 vm04 ceph-mon[51427]: pgmap v483: 164 pgs: 164 active+clean; 455 KiB data, 476 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:35:26.760 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:26 vm09 ceph-mon[54744]: osdmap e351: 8 total, 8 up, 8 in 2026-03-09T18:35:26.760 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:26 vm09 ceph-mon[54744]: pgmap v483: 164 pgs: 164 active+clean; 455 KiB data, 476 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:35:27.108 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:35:26 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug there is no tcmu-runner data available 2026-03-09T18:35:27.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:27 vm04 ceph-mon[57581]: osdmap e352: 8 total, 8 up, 8 in 2026-03-09T18:35:27.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:27 vm04 ceph-mon[57581]: from='client.? 
192.168.123.104:0/2074537935' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:35:27.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:27 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:35:27.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:27 vm04 ceph-mon[57581]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:35:27.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:27 vm04 ceph-mon[51427]: osdmap e352: 8 total, 8 up, 8 in 2026-03-09T18:35:27.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:27 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/2074537935' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:35:27.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:27 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:35:27.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:27 vm04 ceph-mon[51427]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:35:27.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:27 vm09 ceph-mon[54744]: osdmap e352: 8 total, 8 up, 8 in 2026-03-09T18:35:27.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:27 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/2074537935' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:35:27.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:27 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:35:27.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:27 vm09 ceph-mon[54744]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:35:28.373 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx2::test_get_last_version PASSED [ 89%] 2026-03-09T18:35:28.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:28 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:35:28.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:28 vm04 ceph-mon[57581]: osdmap e353: 8 total, 8 up, 8 in 2026-03-09T18:35:28.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:28 vm04 ceph-mon[57581]: pgmap v486: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 481 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:35:28.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:28 vm04 ceph-mon[57581]: osdmap e354: 8 total, 8 up, 8 in 2026-03-09T18:35:28.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:28 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:35:28.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:28 vm04 ceph-mon[51427]: osdmap e353: 8 total, 8 up, 8 in 2026-03-09T18:35:28.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:28 vm04 ceph-mon[51427]: pgmap v486: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 481 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:35:28.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:28 vm04 ceph-mon[51427]: osdmap e354: 8 total, 8 up, 8 in 2026-03-09T18:35:28.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:28 vm09 ceph-mon[54744]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:35:28.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:28 vm09 ceph-mon[54744]: osdmap e353: 8 total, 8 up, 8 in 2026-03-09T18:35:28.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:28 vm09 ceph-mon[54744]: pgmap v486: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 481 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:35:28.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:28 vm09 ceph-mon[54744]: osdmap e354: 8 total, 8 up, 8 in 2026-03-09T18:35:29.217 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:35:28 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ::ffff:192.168.123.109 - - [09/Mar/2026:18:35:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:35:29.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:29 vm04 ceph-mon[57581]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:35:29.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:29 vm04 ceph-mon[51427]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:35:29.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:29 vm09 ceph-mon[54744]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T18:35:30.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:30 vm04 ceph-mon[57581]: osdmap e355: 8 total, 8 up, 8 in 2026-03-09T18:35:30.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:30 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3697199078' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:35:30.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:30 vm04 ceph-mon[57581]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:35:30.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:30 vm04 ceph-mon[57581]: pgmap v489: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 481 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:35:30.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:30 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:35:30.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:30 vm04 ceph-mon[57581]: osdmap e356: 8 total, 8 up, 8 in 2026-03-09T18:35:30.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:30 vm04 ceph-mon[51427]: osdmap e355: 8 total, 8 up, 8 in 2026-03-09T18:35:30.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:30 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3697199078' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:35:30.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:30 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:35:30.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:30 vm04 ceph-mon[51427]: pgmap v489: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 481 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:35:30.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:30 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:35:30.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:30 vm04 ceph-mon[51427]: osdmap e356: 8 total, 8 up, 8 in 2026-03-09T18:35:30.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:30 vm09 ceph-mon[54744]: osdmap e355: 8 total, 8 up, 8 in 2026-03-09T18:35:30.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:30 vm09 ceph-mon[54744]: from='client.? 
192.168.123.104:0/3697199078' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:35:30.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:30 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T18:35:30.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:30 vm09 ceph-mon[54744]: pgmap v489: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 481 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:35:30.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:30 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T18:35:30.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:30 vm09 ceph-mon[54744]: osdmap e356: 8 total, 8 up, 8 in 2026-03-09T18:35:31.392 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx2::test_get_stats PASSED [ 90%] 2026-03-09T18:35:31.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:31 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:35:31.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:31 vm04 ceph-mon[57581]: osdmap e357: 8 total, 8 up, 8 in 2026-03-09T18:35:31.716 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:31 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:35:31.716 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:31 vm04 ceph-mon[51427]: osdmap e357: 8 total, 8 up, 8 in 2026-03-09T18:35:31.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:31 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:35:31.858 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:31 vm09 ceph-mon[54744]: osdmap e357: 8 total, 8 up, 8 in 2026-03-09T18:35:32.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:32 vm04 ceph-mon[57581]: pgmap v492: 164 pgs: 164 active+clean; 455 KiB data, 481 MiB used, 159 GiB / 160 GiB avail 2026-03-09T18:35:32.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:32 vm04 ceph-mon[57581]: osdmap e358: 8 total, 8 up, 8 in 2026-03-09T18:35:32.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:32 vm04 ceph-mon[51427]: pgmap v492: 164 pgs: 164 active+clean; 455 KiB data, 481 MiB used, 159 GiB / 160 GiB avail 2026-03-09T18:35:32.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:32 vm04 ceph-mon[51427]: osdmap e358: 8 total, 8 up, 8 in 2026-03-09T18:35:32.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:32 vm09 ceph-mon[54744]: pgmap v492: 164 pgs: 164 active+clean; 455 KiB data, 481 MiB used, 159 GiB / 160 GiB avail 2026-03-09T18:35:32.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:32 vm09 ceph-mon[54744]: osdmap e358: 8 total, 8 up, 8 in 2026-03-09T18:35:34.412 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestObject::test_read PASSED [ 91%] 2026-03-09T18:35:34.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:34 vm04 ceph-mon[57581]: osdmap e359: 8 total, 8 up, 8 in 2026-03-09T18:35:34.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:34 vm04 ceph-mon[57581]: pgmap v495: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 490 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:35:34.716 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:34 vm04 ceph-mon[51427]: osdmap e359: 8 total, 8 up, 8 in 2026-03-09T18:35:34.716 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:34 vm04 ceph-mon[51427]: pgmap v495: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 490 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:35:34.858 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:34 vm09 ceph-mon[54744]: osdmap e359: 8 total, 8 up, 8 in
2026-03-09T18:35:34.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:34 vm09 ceph-mon[54744]: pgmap v495: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 490 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:35.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:35 vm04 ceph-mon[57581]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T18:35:35.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:35 vm04 ceph-mon[57581]: osdmap e360: 8 total, 8 up, 8 in
2026-03-09T18:35:35.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:35 vm04 ceph-mon[51427]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T18:35:35.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:35 vm04 ceph-mon[51427]: osdmap e360: 8 total, 8 up, 8 in
2026-03-09T18:35:35.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:35 vm09 ceph-mon[54744]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T18:35:35.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:35 vm09 ceph-mon[54744]: osdmap e360: 8 total, 8 up, 8 in
2026-03-09T18:35:36.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:36 vm04 ceph-mon[57581]: osdmap e361: 8 total, 8 up, 8 in
2026-03-09T18:35:36.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:36 vm04 ceph-mon[57581]: pgmap v498: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 490 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:36.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:36 vm04 ceph-mon[51427]: osdmap e361: 8 total, 8 up, 8 in
2026-03-09T18:35:36.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:36 vm04 ceph-mon[51427]: pgmap v498: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 490 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:36.771 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:36 vm09 ceph-mon[54744]: osdmap e361: 8 total, 8 up, 8 in
2026-03-09T18:35:36.771 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:36 vm09 ceph-mon[54744]: pgmap v498: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 490 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:37.108 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:35:36 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug there is no tcmu-runner data available
2026-03-09T18:35:37.451 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestObject::test_seek PASSED [ 92%]
2026-03-09T18:35:37.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:37 vm04 ceph-mon[57581]: osdmap e362: 8 total, 8 up, 8 in
2026-03-09T18:35:37.717 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:37 vm04 ceph-mon[57581]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:35:37.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:37 vm04 ceph-mon[51427]: osdmap e362: 8 total, 8 up, 8 in
2026-03-09T18:35:37.717 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:37 vm04 ceph-mon[51427]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:35:37.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:37 vm09 ceph-mon[54744]: osdmap e362: 8 total, 8 up, 8 in
2026-03-09T18:35:37.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:37 vm09 ceph-mon[54744]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:35:38.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:38 vm09 ceph-mon[54744]: osdmap e363: 8 total, 8 up, 8 in
2026-03-09T18:35:38.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:38 vm09 ceph-mon[54744]: pgmap v501: 164 pgs: 164 active+clean; 455 KiB data, 490 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:38.903 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:38 vm04 ceph-mon[57581]: osdmap e363: 8 total, 8 up, 8 in
2026-03-09T18:35:38.903 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:38 vm04 ceph-mon[57581]: pgmap v501: 164 pgs: 164 active+clean; 455 KiB data, 490 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:38.903 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:38 vm04 ceph-mon[51427]: osdmap e363: 8 total, 8 up, 8 in
2026-03-09T18:35:38.904 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:38 vm04 ceph-mon[51427]: pgmap v501: 164 pgs: 164 active+clean; 455 KiB data, 490 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:39.217 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:35:38 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ::ffff:192.168.123.109 - - [09/Mar/2026:18:35:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-09T18:35:39.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:39 vm09 ceph-mon[54744]: osdmap e364: 8 total, 8 up, 8 in
2026-03-09T18:35:39.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:39 vm04 ceph-mon[57581]: osdmap e364: 8 total, 8 up, 8 in
2026-03-09T18:35:39.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:39 vm04 ceph-mon[51427]: osdmap e364: 8 total, 8 up, 8 in
2026-03-09T18:35:40.571 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestObject::test_write PASSED [ 93%]
2026-03-09T18:35:40.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:40 vm09 ceph-mon[54744]: pgmap v503: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 490 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:40.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:40 vm09 ceph-mon[54744]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T18:35:40.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:40 vm09 ceph-mon[54744]: osdmap e365: 8 total, 8 up, 8 in
2026-03-09T18:35:40.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:40 vm04 ceph-mon[57581]: pgmap v503: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 490 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:40.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:40 vm04 ceph-mon[57581]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T18:35:40.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:40 vm04 ceph-mon[57581]: osdmap e365: 8 total, 8 up, 8 in
2026-03-09T18:35:40.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:40 vm04 ceph-mon[51427]: pgmap v503: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 490 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:40.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:40 vm04 ceph-mon[51427]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T18:35:40.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:40 vm04 ceph-mon[51427]: osdmap e365: 8 total, 8 up, 8 in
2026-03-09T18:35:41.858 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:41 vm09 ceph-mon[54744]: osdmap e366: 8 total, 8 up, 8 in
2026-03-09T18:35:41.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:41 vm04 ceph-mon[57581]: osdmap e366: 8 total, 8 up, 8 in
2026-03-09T18:35:41.966 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:41 vm04 ceph-mon[51427]: osdmap e366: 8 total, 8 up, 8 in
2026-03-09T18:35:42.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:42 vm04 ceph-mon[57581]: pgmap v506: 164 pgs: 164 active+clean; 455 KiB data, 491 MiB used, 159 GiB / 160 GiB avail
2026-03-09T18:35:42.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:42 vm04 ceph-mon[57581]: osdmap e367: 8 total, 8 up, 8 in
2026-03-09T18:35:42.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:42 vm04 ceph-mon[51427]: pgmap v506: 164 pgs: 164 active+clean; 455 KiB data, 491 MiB used, 159 GiB / 160 GiB avail
2026-03-09T18:35:42.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:42 vm04 ceph-mon[51427]: osdmap e367: 8 total, 8 up, 8 in
2026-03-09T18:35:43.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:42 vm09 ceph-mon[54744]: pgmap v506: 164 pgs: 164 active+clean; 455 KiB data, 491 MiB used, 159 GiB / 160 GiB avail
2026-03-09T18:35:43.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:42 vm09 ceph-mon[54744]: osdmap e367: 8 total, 8 up, 8 in
2026-03-09T18:35:43.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:43 vm04 ceph-mon[57581]: osdmap e368: 8 total, 8 up, 8 in
2026-03-09T18:35:43.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:43 vm04 ceph-mon[51427]: osdmap e368: 8 total, 8 up, 8 in
2026-03-09T18:35:44.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:43 vm09 ceph-mon[54744]: osdmap e368: 8 total, 8 up, 8 in
2026-03-09T18:35:44.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:44 vm04 ceph-mon[57581]: pgmap v509: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 491 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:44.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:44 vm04 ceph-mon[57581]: osdmap e369: 8 total, 8 up, 8 in
2026-03-09T18:35:44.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:44 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:35:44.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:44 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:35:44.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:44 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:35:44.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:44 vm04 ceph-mon[57581]: from='mgr.14637 ' entity='mgr.y'
2026-03-09T18:35:44.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:44 vm04 ceph-mon[51427]: pgmap v509: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 491 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:44.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:44 vm04 ceph-mon[51427]: osdmap e369: 8 total, 8 up, 8 in
2026-03-09T18:35:44.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:44 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:35:44.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:44 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:35:44.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:44 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:35:44.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:44 vm04 ceph-mon[51427]: from='mgr.14637 ' entity='mgr.y'
2026-03-09T18:35:45.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:44 vm09 ceph-mon[54744]: pgmap v509: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 491 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:45.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:44 vm09 ceph-mon[54744]: osdmap e369: 8 total, 8 up, 8 in
2026-03-09T18:35:45.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:44 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:35:45.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:44 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:35:45.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:44 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:35:45.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:44 vm09 ceph-mon[54744]: from='mgr.14637 ' entity='mgr.y'
2026-03-09T18:35:45.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:45 vm04 ceph-mon[57581]: osdmap e370: 8 total, 8 up, 8 in
2026-03-09T18:35:45.966 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:45 vm04 ceph-mon[51427]: osdmap e370: 8 total, 8 up, 8 in
2026-03-09T18:35:46.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:45 vm09 ceph-mon[54744]: osdmap e370: 8 total, 8 up, 8 in
2026-03-09T18:35:46.966 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:46 vm04 ceph-mon[57581]: pgmap v512: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 491 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:46.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:46 vm04 ceph-mon[57581]: osdmap e371: 8 total, 8 up, 8 in
2026-03-09T18:35:46.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:46 vm04 ceph-mon[57581]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:35:46.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:46 vm04 ceph-mon[51427]: pgmap v512: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 491 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:46.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:46 vm04 ceph-mon[51427]: osdmap e371: 8 total, 8 up, 8 in
2026-03-09T18:35:46.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:46 vm04 ceph-mon[51427]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:35:47.108 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:35:46 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug there is no tcmu-runner data available
2026-03-09T18:35:47.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:46 vm09 ceph-mon[54744]: pgmap v512: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 491 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:47.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:46 vm09 ceph-mon[54744]: osdmap e371: 8 total, 8 up, 8 in
2026-03-09T18:35:47.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:46 vm09 ceph-mon[54744]: from='mgr.14637 192.168.123.104:0/2753032613' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:35:47.804 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoCtxSelfManagedSnaps::test PASSED [ 94%]
2026-03-09T18:35:47.818 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestCommand::test_monmap_dump PASSED [ 95%]
2026-03-09T18:35:47.826 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestCommand::test_osd_bench PASSED [ 96%]
2026-03-09T18:35:47.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:47 vm04 ceph-mon[57581]: osdmap e372: 8 total, 8 up, 8 in
2026-03-09T18:35:47.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:47 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/522126704' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T18:35:47.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:47 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T18:35:47.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:47 vm04 ceph-mon[57581]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:35:47.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:47 vm04 ceph-mon[51427]: osdmap e372: 8 total, 8 up, 8 in
2026-03-09T18:35:47.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:47 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/522126704' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T18:35:47.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:47 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T18:35:47.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:47 vm04 ceph-mon[51427]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:35:48.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:47 vm09 ceph-mon[54744]: osdmap e372: 8 total, 8 up, 8 in
2026-03-09T18:35:48.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:47 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/522126704' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T18:35:48.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:47 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T18:35:48.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:47 vm09 ceph-mon[54744]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:35:48.804 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestCommand::test_ceph_osd_pool_create_utf8 PASSED [ 97%]
2026-03-09T18:35:48.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:48 vm04 ceph-mon[57581]: pgmap v515: 196 pgs: 196 active+clean; 455 KiB data, 492 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
2026-03-09T18:35:48.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:48 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T18:35:48.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:48 vm04 ceph-mon[57581]: osdmap e373: 8 total, 8 up, 8 in
2026-03-09T18:35:48.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:48 vm04 ceph-mon[57581]: osdmap e374: 8 total, 8 up, 8 in
2026-03-09T18:35:48.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:48 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3010003140' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
2026-03-09T18:35:48.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:48 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3010003140' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-09T18:35:48.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:48 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/3010003140' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json", "epoch": 1003}]: dispatch
2026-03-09T18:35:48.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:48 vm04 ceph-mon[57581]: from='client.? 192.168.123.104:0/1169707499' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]: dispatch
2026-03-09T18:35:48.967 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:48 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]: dispatch
2026-03-09T18:35:48.967 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:35:48 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ::ffff:192.168.123.109 - - [09/Mar/2026:18:35:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-09T18:35:48.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:48 vm04 ceph-mon[51427]: pgmap v515: 196 pgs: 196 active+clean; 455 KiB data, 492 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
2026-03-09T18:35:48.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:48 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T18:35:48.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:48 vm04 ceph-mon[51427]: osdmap e373: 8 total, 8 up, 8 in
2026-03-09T18:35:48.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:48 vm04 ceph-mon[51427]: osdmap e374: 8 total, 8 up, 8 in
2026-03-09T18:35:48.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:48 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3010003140' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
2026-03-09T18:35:48.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:48 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3010003140' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-09T18:35:48.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:48 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/3010003140' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json", "epoch": 1003}]: dispatch
2026-03-09T18:35:48.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:48 vm04 ceph-mon[51427]: from='client.? 192.168.123.104:0/1169707499' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]: dispatch
2026-03-09T18:35:48.967 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:48 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]: dispatch
2026-03-09T18:35:49.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:48 vm09 ceph-mon[54744]: pgmap v515: 196 pgs: 196 active+clean; 455 KiB data, 492 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
2026-03-09T18:35:49.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:48 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T18:35:49.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:48 vm09 ceph-mon[54744]: osdmap e373: 8 total, 8 up, 8 in
2026-03-09T18:35:49.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:48 vm09 ceph-mon[54744]: osdmap e374: 8 total, 8 up, 8 in
2026-03-09T18:35:49.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:48 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3010003140' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
2026-03-09T18:35:49.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:48 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3010003140' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-09T18:35:49.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:48 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/3010003140' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json", "epoch": 1003}]: dispatch
2026-03-09T18:35:49.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:48 vm09 ceph-mon[54744]: from='client.? 192.168.123.104:0/1169707499' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]: dispatch
2026-03-09T18:35:49.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:48 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]: dispatch
2026-03-09T18:35:50.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:49 vm09 ceph-mon[54744]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]': finished
2026-03-09T18:35:50.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:49 vm09 ceph-mon[54744]: osdmap e375: 8 total, 8 up, 8 in
2026-03-09T18:35:50.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:49 vm04 ceph-mon[57581]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]': finished
2026-03-09T18:35:50.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:49 vm04 ceph-mon[57581]: osdmap e375: 8 total, 8 up, 8 in
2026-03-09T18:35:50.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:49 vm04 ceph-mon[51427]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]': finished
2026-03-09T18:35:50.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:49 vm04 ceph-mon[51427]: osdmap e375: 8 total, 8 up, 8 in
2026-03-09T18:35:51.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:50 vm09 ceph-mon[54744]: pgmap v519: 180 pgs: 16 unknown, 164 active+clean; 455 KiB data, 492 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s
2026-03-09T18:35:51.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:50 vm09 ceph-mon[54744]: osdmap e376: 8 total, 8 up, 8 in
2026-03-09T18:35:51.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:50 vm09 ceph-mon[54744]: osdmap e377: 8 total, 8 up, 8 in
2026-03-09T18:35:51.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:50 vm04 ceph-mon[57581]: pgmap v519: 180 pgs: 16 unknown, 164 active+clean; 455 KiB data, 492 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s
2026-03-09T18:35:51.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:50 vm04 ceph-mon[57581]: osdmap e376: 8 total, 8 up, 8 in
2026-03-09T18:35:51.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:50 vm04 ceph-mon[57581]: osdmap e377: 8 total, 8 up, 8 in
2026-03-09T18:35:51.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:50 vm04 ceph-mon[51427]: pgmap v519: 180 pgs: 16 unknown, 164 active+clean; 455 KiB data, 492 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s
2026-03-09T18:35:51.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:50 vm04 ceph-mon[51427]: osdmap e376: 8 total, 8 up, 8 in
2026-03-09T18:35:51.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:50 vm04 ceph-mon[51427]: osdmap e377: 8 total, 8 up, 8 in
2026-03-09T18:35:52.845 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestWatchNotify::test PASSED [ 98%]
2026-03-09T18:35:53.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:52 vm09 ceph-mon[54744]: pgmap v522: 212 pgs: 19 unknown, 193 active+clean; 455 KiB data, 492 MiB used, 159 GiB / 160 GiB avail; 808 B/s rd, 0 op/s
2026-03-09T18:35:53.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:52 vm09 ceph-mon[54744]: osdmap e378: 8 total, 8 up, 8 in
2026-03-09T18:35:53.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:52 vm09 ceph-mon[54744]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T18:35:53.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:52 vm04 ceph-mon[57581]: pgmap v522: 212 pgs: 19 unknown, 193 active+clean; 455 KiB data, 492 MiB used, 159 GiB / 160 GiB avail; 808 B/s rd, 0 op/s
2026-03-09T18:35:53.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:52 vm04 ceph-mon[57581]: osdmap e378: 8 total, 8 up, 8 in
2026-03-09T18:35:53.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:52 vm04 ceph-mon[57581]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T18:35:53.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:52 vm04 ceph-mon[51427]: pgmap v522: 212 pgs: 19 unknown, 193 active+clean; 455 KiB data, 492 MiB used, 159 GiB / 160 GiB avail; 808 B/s rd, 0 op/s
2026-03-09T18:35:53.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:52 vm04 ceph-mon[51427]: osdmap e378: 8 total, 8 up, 8 in
2026-03-09T18:35:53.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:52 vm04 ceph-mon[51427]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T18:35:54.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:53 vm09 ceph-mon[54744]: osdmap e379: 8 total, 8 up, 8 in
2026-03-09T18:35:54.117 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:53 vm04 ceph-mon[57581]: osdmap e379: 8 total, 8 up, 8 in
2026-03-09T18:35:54.117 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:53 vm04 ceph-mon[51427]: osdmap e379: 8 total, 8 up, 8 in
2026-03-09T18:35:55.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:54 vm04 ceph-mon[57581]: pgmap v525: 180 pgs: 180 active+clean; 455 KiB data, 493 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:55.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:54 vm04 ceph-mon[57581]: osdmap e380: 8 total, 8 up, 8 in
2026-03-09T18:35:55.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:54 vm04 ceph-mon[51427]: pgmap v525: 180 pgs: 180 active+clean; 455 KiB data, 493 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:55.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:54 vm04 ceph-mon[51427]: osdmap e380: 8 total, 8 up, 8 in
2026-03-09T18:35:55.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:54 vm09 ceph-mon[54744]: pgmap v525: 180 pgs: 180 active+clean; 455 KiB data, 493 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:55.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:54 vm09 ceph-mon[54744]: osdmap e380: 8 total, 8 up, 8 in
2026-03-09T18:35:56.216 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:55 vm04 ceph-mon[57581]: osdmap e381: 8 total, 8 up, 8 in
2026-03-09T18:35:56.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:55 vm04 ceph-mon[57581]: osdmap e382: 8 total, 8 up, 8 in
2026-03-09T18:35:56.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:55 vm04 ceph-mon[51427]: osdmap e381: 8 total, 8 up, 8 in
2026-03-09T18:35:56.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:55 vm04 ceph-mon[51427]: osdmap e382: 8 total, 8 up, 8 in
2026-03-09T18:35:56.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:55 vm09 ceph-mon[54744]: osdmap e381: 8 total, 8 up, 8 in
2026-03-09T18:35:56.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:55 vm09 ceph-mon[54744]: osdmap e382: 8 total, 8 up, 8 in
2026-03-09T18:35:56.905 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestWatchNotify::test_aio_notify PASSED [100%]
2026-03-09T18:35:56.905 INFO:tasks.workunit.client.0.vm04.stdout:
2026-03-09T18:35:56.905 INFO:tasks.workunit.client.0.vm04.stdout:=============================== warnings summary ===============================
2026-03-09T18:35:56.905 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py:210
2026-03-09T18:35:56.905 INFO:tasks.workunit.client.0.vm04.stdout: /home/ubuntu/cephtest/clone.client.0/src/test/pybind/test_rados.py:210: DeprecationWarning: invalid escape sequence \-
2026-03-09T18:35:56.905 INFO:tasks.workunit.client.0.vm04.stdout: assert re.match('[0-9a-f\-]{36}', fsid, re.I)
2026-03-09T18:35:56.905 INFO:tasks.workunit.client.0.vm04.stdout:
2026-03-09T18:35:56.905 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py:960
2026-03-09T18:35:56.905 INFO:tasks.workunit.client.0.vm04.stdout: /home/ubuntu/cephtest/clone.client.0/src/test/pybind/test_rados.py:960: PytestUnknownMarkWarning: Unknown pytest.mark.wait - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html
2026-03-09T18:35:56.905 INFO:tasks.workunit.client.0.vm04.stdout: @pytest.mark.wait
2026-03-09T18:35:56.905 INFO:tasks.workunit.client.0.vm04.stdout:
2026-03-09T18:35:56.905 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py:996
2026-03-09T18:35:56.905 INFO:tasks.workunit.client.0.vm04.stdout: /home/ubuntu/cephtest/clone.client.0/src/test/pybind/test_rados.py:996: PytestUnknownMarkWarning: Unknown pytest.mark.wait - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html
2026-03-09T18:35:56.905 INFO:tasks.workunit.client.0.vm04.stdout: @pytest.mark.wait
2026-03-09T18:35:56.905 INFO:tasks.workunit.client.0.vm04.stdout:
2026-03-09T18:35:56.905 INFO:tasks.workunit.client.0.vm04.stdout:../../../clone.client.0/src/test/pybind/test_rados.py:1024
2026-03-09T18:35:56.905 INFO:tasks.workunit.client.0.vm04.stdout: /home/ubuntu/cephtest/clone.client.0/src/test/pybind/test_rados.py:1024: PytestUnknownMarkWarning: Unknown pytest.mark.wait - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html
2026-03-09T18:35:56.905 INFO:tasks.workunit.client.0.vm04.stdout: @pytest.mark.wait
2026-03-09T18:35:56.905 INFO:tasks.workunit.client.0.vm04.stdout:
2026-03-09T18:35:56.905 INFO:tasks.workunit.client.0.vm04.stdout::210
2026-03-09T18:35:56.905 INFO:tasks.workunit.client.0.vm04.stdout::210
2026-03-09T18:35:56.906 INFO:tasks.workunit.client.0.vm04.stdout::210
2026-03-09T18:35:56.906 INFO:tasks.workunit.client.0.vm04.stdout::210
2026-03-09T18:35:56.906 INFO:tasks.workunit.client.0.vm04.stdout::210
2026-03-09T18:35:56.906 INFO:tasks.workunit.client.0.vm04.stdout::210
2026-03-09T18:35:56.906 INFO:tasks.workunit.client.0.vm04.stdout::210
2026-03-09T18:35:56.906 INFO:tasks.workunit.client.0.vm04.stdout::210
2026-03-09T18:35:56.906 INFO:tasks.workunit.client.0.vm04.stdout::210
2026-03-09T18:35:56.906 INFO:tasks.workunit.client.0.vm04.stdout: :210: DeprecationWarning: invalid escape sequence \-
2026-03-09T18:35:56.906 INFO:tasks.workunit.client.0.vm04.stdout:
2026-03-09T18:35:56.906 INFO:tasks.workunit.client.0.vm04.stdout:-- Docs: https://docs.pytest.org/en/stable/warnings.html
2026-03-09T18:35:56.906 INFO:tasks.workunit.client.0.vm04.stdout:================= 91 passed, 13 warnings in 331.84s (0:05:31) ==================
2026-03-09T18:35:56.925 INFO:tasks.workunit.client.0.vm04.stderr:+ exit 0
2026-03-09T18:35:56.926 INFO:teuthology.orchestra.run:Running command with timeout 3600
2026-03-09T18:35:56.926 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0/tmp
2026-03-09T18:35:56.992 INFO:tasks.workunit:Stopping ['rados/test_python.sh'] on client.0...
2026-03-09T18:35:56.992 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0
2026-03-09T18:35:57.108 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:35:56 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug there is no tcmu-runner data available
2026-03-09T18:35:57.108 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:56 vm09 ceph-mon[54744]: pgmap v528: 212 pgs: 32 unknown, 180 active+clean; 455 KiB data, 493 MiB used, 159 GiB / 160 GiB avail
2026-03-09T18:35:57.217 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:56 vm04 ceph-mon[57581]: pgmap v528: 212 pgs: 32 unknown, 180 active+clean; 455 KiB data, 493 MiB used, 159 GiB / 160 GiB avail
2026-03-09T18:35:57.217 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:56 vm04 ceph-mon[51427]: pgmap v528: 212 pgs: 32 unknown, 180 active+clean; 455 KiB data, 493 MiB used, 159 GiB / 160 GiB avail
2026-03-09T18:35:57.384 DEBUG:teuthology.parallel:result is None
2026-03-09T18:35:57.384 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0
2026-03-09T18:35:57.419 INFO:tasks.workunit:Deleted dir /home/ubuntu/cephtest/mnt.0/client.0
2026-03-09T18:35:57.419 DEBUG:teuthology.orchestra.run.vm04:> rmdir -- /home/ubuntu/cephtest/mnt.0
2026-03-09T18:35:57.475 INFO:tasks.workunit:Deleted artificial mount point /home/ubuntu/cephtest/mnt.0/client.0
2026-03-09T18:35:57.476 DEBUG:teuthology.run_tasks:Unwinding manager cephadm
2026-03-09T18:35:57.478 INFO:tasks.cephadm:Teardown begin
2026-03-09T18:35:57.478 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-09T18:35:57.538 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-09T18:35:57.563 INFO:tasks.cephadm:Disabling cephadm mgr module
2026-03-09T18:35:57.563 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5769e1c8-1be5-11f1-a591-591820987f3e -- ceph mgr module disable cephadm
2026-03-09T18:35:57.734 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/mon.a/config
2026-03-09T18:35:57.751 INFO:teuthology.orchestra.run.vm04.stderr:Error: statfs /etc/ceph/ceph.client.admin.keyring: no such file or directory
2026-03-09T18:35:57.768 DEBUG:teuthology.orchestra.run:got remote process result: 125
2026-03-09T18:35:57.769 INFO:tasks.cephadm:Cleaning up testdir ceph.* files...
2026-03-09T18:35:57.769 DEBUG:teuthology.orchestra.run.vm04:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub
2026-03-09T18:35:57.782 DEBUG:teuthology.orchestra.run.vm09:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub
2026-03-09T18:35:57.797 INFO:tasks.cephadm:Stopping all daemons...
2026-03-09T18:35:57.797 INFO:tasks.cephadm.mon.a:Stopping mon.a...
2026-03-09T18:35:57.797 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ceph-5769e1c8-1be5-11f1-a591-591820987f3e@mon.a 2026-03-09T18:35:58.008 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:57 vm04 ceph-mon[57581]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:35:58.009 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:57 vm04 ceph-mon[57581]: osdmap e383: 8 total, 8 up, 8 in 2026-03-09T18:35:58.009 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:57 vm04 systemd[1]: Stopping Ceph mon.a for 5769e1c8-1be5-11f1-a591-591820987f3e... 2026-03-09T18:35:58.009 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:57 vm04 ceph-mon[51427]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:35:58.009 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:57 vm04 ceph-mon[51427]: osdmap e383: 8 total, 8 up, 8 in 2026-03-09T18:35:58.009 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:57 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mon-a[51423]: 2026-03-09T18:35:57.954+0000 7fb2abae5640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T18:35:58.009 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:57 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mon-a[51423]: 2026-03-09T18:35:57.954+0000 7fb2abae5640 -1 mon.a@0(leader) e3 *** Got Signal Terminated *** 2026-03-09T18:35:58.009 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 09 18:35:57 vm04 podman[89649]: 2026-03-09 18:35:57.997725822 +0000 UTC m=+0.055624304 container died 5a16b990a68cc0d763d75470910f85997a680d7c9892e3e2c73e5137df05e897 
(image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mon-a, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) 2026-03-09T18:35:58.078 DEBUG:teuthology.orchestra.run.vm04:> sudo pkill -f 'journalctl -f -n 0 -u ceph-5769e1c8-1be5-11f1-a591-591820987f3e@mon.a.service' 2026-03-09T18:35:58.108 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:35:58.108 INFO:tasks.cephadm.mon.a:Stopped mon.a 2026-03-09T18:35:58.108 INFO:tasks.cephadm.mon.b:Stopping mon.c... 2026-03-09T18:35:58.108 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ceph-5769e1c8-1be5-11f1-a591-591820987f3e@mon.c 2026-03-09T18:35:58.320 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:58 vm04 systemd[1]: Stopping Ceph mon.c for 5769e1c8-1be5-11f1-a591-591820987f3e... 
2026-03-09T18:35:58.320 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:58 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mon-c[57577]: 2026-03-09T18:35:58.235+0000 7feeb0959640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.c -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T18:35:58.320 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 09 18:35:58 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mon-c[57577]: 2026-03-09T18:35:58.235+0000 7feeb0959640 -1 mon.c@2(peon) e3 *** Got Signal Terminated *** 2026-03-09T18:35:58.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:57 vm09 ceph-mon[54744]: from='client.14535 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:35:58.358 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:57 vm09 ceph-mon[54744]: osdmap e383: 8 total, 8 up, 8 in 2026-03-09T18:35:58.389 DEBUG:teuthology.orchestra.run.vm04:> sudo pkill -f 'journalctl -f -n 0 -u ceph-5769e1c8-1be5-11f1-a591-591820987f3e@mon.c.service' 2026-03-09T18:35:58.417 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:35:58.417 INFO:tasks.cephadm.mon.b:Stopped mon.c 2026-03-09T18:35:58.417 INFO:tasks.cephadm.mon.b:Stopping mon.b... 2026-03-09T18:35:58.417 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-5769e1c8-1be5-11f1-a591-591820987f3e@mon.b 2026-03-09T18:35:58.728 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:58 vm09 systemd[1]: Stopping Ceph mon.b for 5769e1c8-1be5-11f1-a591-591820987f3e... 
2026-03-09T18:35:58.728 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:58 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mon-b[54740]: 2026-03-09T18:35:58.521+0000 7f9ebd6e0640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.b -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T18:35:58.728 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:58 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mon-b[54740]: 2026-03-09T18:35:58.521+0000 7f9ebd6e0640 -1 mon.b@1(peon) e3 *** Got Signal Terminated *** 2026-03-09T18:35:58.728 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:58 vm09 podman[82584]: 2026-03-09 18:35:58.666027652 +0000 UTC m=+0.159200249 container died e43d39d4682de5b7fad48d8c3158253895cd0ea877b7486724fa85472c43da33 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mon-b, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-09T18:35:58.728 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:58 vm09 podman[82584]: 2026-03-09 18:35:58.681052893 +0000 UTC m=+0.174225500 container remove 
e43d39d4682de5b7fad48d8c3158253895cd0ea877b7486724fa85472c43da33 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mon-b, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-09T18:35:58.728 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:35:58 vm09 bash[82584]: ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mon-b 2026-03-09T18:35:58.737 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-5769e1c8-1be5-11f1-a591-591820987f3e@mon.b.service' 2026-03-09T18:35:58.767 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:35:58.767 INFO:tasks.cephadm.mon.b:Stopped mon.b 2026-03-09T18:35:58.767 INFO:tasks.cephadm.mgr.y:Stopping mgr.y... 2026-03-09T18:35:58.767 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ceph-5769e1c8-1be5-11f1-a591-591820987f3e@mgr.y 2026-03-09T18:35:59.034 DEBUG:teuthology.orchestra.run.vm04:> sudo pkill -f 'journalctl -f -n 0 -u ceph-5769e1c8-1be5-11f1-a591-591820987f3e@mgr.y.service' 2026-03-09T18:35:59.057 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:35:58 vm04 systemd[1]: Stopping Ceph mgr.y for 5769e1c8-1be5-11f1-a591-591820987f3e... 
2026-03-09T18:35:59.057 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:35:58 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y[51636]: ::ffff:192.168.123.109 - - [09/Mar/2026:18:35:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T18:35:59.057 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:35:58 vm04 podman[89860]: 2026-03-09 18:35:58.959272325 +0000 UTC m=+0.107275981 container died 7573eb34f6f45514dd45a5a7b29fe9174e4b0928f92ec4426185da6d2309e559 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y, ceph=True, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-09T18:35:59.057 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:35:58 vm04 podman[89860]: 2026-03-09 18:35:58.98069737 +0000 UTC m=+0.128701026 container remove 7573eb34f6f45514dd45a5a7b29fe9174e4b0928f92ec4426185da6d2309e559 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, 
CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.41.3) 2026-03-09T18:35:59.057 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:35:58 vm04 bash[89860]: ceph-5769e1c8-1be5-11f1-a591-591820987f3e-mgr-y 2026-03-09T18:35:59.057 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:35:59 vm04 systemd[1]: ceph-5769e1c8-1be5-11f1-a591-591820987f3e@mgr.y.service: Deactivated successfully. 2026-03-09T18:35:59.057 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:35:59 vm04 systemd[1]: Stopped Ceph mgr.y for 5769e1c8-1be5-11f1-a591-591820987f3e. 2026-03-09T18:35:59.057 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 09 18:35:59 vm04 systemd[1]: ceph-5769e1c8-1be5-11f1-a591-591820987f3e@mgr.y.service: Consumed 24.575s CPU time. 2026-03-09T18:35:59.065 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:35:59.065 INFO:tasks.cephadm.mgr.y:Stopped mgr.y 2026-03-09T18:35:59.065 INFO:tasks.cephadm.mgr.x:Stopping mgr.x... 2026-03-09T18:35:59.065 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-5769e1c8-1be5-11f1-a591-591820987f3e@mgr.x 2026-03-09T18:35:59.272 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-5769e1c8-1be5-11f1-a591-591820987f3e@mgr.x.service' 2026-03-09T18:35:59.301 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:35:59.301 INFO:tasks.cephadm.mgr.x:Stopped mgr.x 2026-03-09T18:35:59.301 INFO:tasks.cephadm.osd.0:Stopping osd.0... 2026-03-09T18:35:59.301 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ceph-5769e1c8-1be5-11f1-a591-591820987f3e@osd.0 2026-03-09T18:35:59.717 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 18:35:59 vm04 systemd[1]: Stopping Ceph osd.0 for 5769e1c8-1be5-11f1-a591-591820987f3e... 
2026-03-09T18:35:59.717 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 18:35:59 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-0[60983]: 2026-03-09T18:35:59.395+0000 7f61b6683640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T18:35:59.717 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 18:35:59 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-0[60983]: 2026-03-09T18:35:59.395+0000 7f61b6683640 -1 osd.0 383 *** Got signal Terminated *** 2026-03-09T18:35:59.717 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 18:35:59 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-0[60983]: 2026-03-09T18:35:59.395+0000 7f61b6683640 -1 osd.0 383 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T18:36:04.717 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 18:36:04 vm04 podman[89962]: 2026-03-09 18:36:04.425109448 +0000 UTC m=+5.041357213 container died 57804f855fa28bfcc18a2a95bd90f3742e0235d48a593c93687e87dfb78fdb7d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-0, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True) 2026-03-09T18:36:04.717 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 18:36:04 vm04 
podman[89962]: 2026-03-09 18:36:04.458716537 +0000 UTC m=+5.074964312 container remove 57804f855fa28bfcc18a2a95bd90f3742e0235d48a593c93687e87dfb78fdb7d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2) 2026-03-09T18:36:04.717 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 18:36:04 vm04 bash[89962]: ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-0 2026-03-09T18:36:04.717 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 18:36:04 vm04 podman[90030]: 2026-03-09 18:36:04.628375121 +0000 UTC m=+0.016266511 container create eccbf7fef80d25ac8e638327d8013619d80a342ea78a1297380e469563dd0701 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-0-deactivate, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS) 2026-03-09T18:36:04.717 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 18:36:04 vm04 podman[90030]: 2026-03-09 18:36:04.663205829 +0000 UTC m=+0.051097220 container init eccbf7fef80d25ac8e638327d8013619d80a342ea78a1297380e469563dd0701 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-0-deactivate, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , ceph=True, io.buildah.version=1.41.3) 2026-03-09T18:36:04.717 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 18:36:04 vm04 podman[90030]: 2026-03-09 18:36:04.665712412 +0000 UTC m=+0.053603802 container start eccbf7fef80d25ac8e638327d8013619d80a342ea78a1297380e469563dd0701 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-0-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, 
OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-09T18:36:04.717 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 09 18:36:04 vm04 podman[90030]: 2026-03-09 18:36:04.666471572 +0000 UTC m=+0.054362952 container attach eccbf7fef80d25ac8e638327d8013619d80a342ea78a1297380e469563dd0701 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-0-deactivate, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-09T18:36:04.820 DEBUG:teuthology.orchestra.run.vm04:> sudo pkill -f 'journalctl -f -n 0 -u ceph-5769e1c8-1be5-11f1-a591-591820987f3e@osd.0.service' 2026-03-09T18:36:04.850 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:36:04.850 INFO:tasks.cephadm.osd.0:Stopped osd.0 2026-03-09T18:36:04.850 INFO:tasks.cephadm.osd.1:Stopping osd.1... 2026-03-09T18:36:04.850 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ceph-5769e1c8-1be5-11f1-a591-591820987f3e@osd.1 2026-03-09T18:36:04.990 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 18:36:04 vm04 systemd[1]: Stopping Ceph osd.1 for 5769e1c8-1be5-11f1-a591-591820987f3e... 
2026-03-09T18:36:05.467 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 18:36:04 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-1[65871]: 2026-03-09T18:36:04.989+0000 7f43c0750640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T18:36:05.467 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 18:36:04 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-1[65871]: 2026-03-09T18:36:04.989+0000 7f43c0750640 -1 osd.1 383 *** Got signal Terminated *** 2026-03-09T18:36:05.467 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 18:36:04 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-1[65871]: 2026-03-09T18:36:04.989+0000 7f43c0750640 -1 osd.1 383 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T18:36:10.290 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 18:36:10 vm04 podman[90123]: 2026-03-09 18:36:10.02993815 +0000 UTC m=+5.052526271 container died 1203d40de82a8ef4f55a7f9f8a5a35e4a140056bf0edffe0cd24541bf46bcb6c (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-1, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.label-schema.license=GPLv2) 2026-03-09T18:36:10.290 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 18:36:10 vm04 
podman[90123]: 2026-03-09 18:36:10.053394437 +0000 UTC m=+5.075982548 container remove 1203d40de82a8ef4f55a7f9f8a5a35e4a140056bf0edffe0cd24541bf46bcb6c (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-1, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0) 2026-03-09T18:36:10.290 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 18:36:10 vm04 bash[90123]: ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-1 2026-03-09T18:36:10.290 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 18:36:10 vm04 podman[90202]: 2026-03-09 18:36:10.198005964 +0000 UTC m=+0.015968151 container create 84e5892df99b0998706e9f1e5001ba38cdc7248384cc7c8bdfc5ce1b8ebc78bd (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-1-deactivate, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-09T18:36:10.290 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 18:36:10 vm04 podman[90202]: 2026-03-09 18:36:10.23937858 +0000 UTC m=+0.057340767 container init 84e5892df99b0998706e9f1e5001ba38cdc7248384cc7c8bdfc5ce1b8ebc78bd (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-1-deactivate, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0) 2026-03-09T18:36:10.290 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 18:36:10 vm04 podman[90202]: 2026-03-09 18:36:10.242929968 +0000 UTC m=+0.060892155 container start 84e5892df99b0998706e9f1e5001ba38cdc7248384cc7c8bdfc5ce1b8ebc78bd (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-1-deactivate, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, 
org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223) 2026-03-09T18:36:10.290 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 18:36:10 vm04 podman[90202]: 2026-03-09 18:36:10.243958222 +0000 UTC m=+0.061920409 container attach 84e5892df99b0998706e9f1e5001ba38cdc7248384cc7c8bdfc5ce1b8ebc78bd (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-1-deactivate, io.buildah.version=1.41.3, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-09T18:36:10.418 DEBUG:teuthology.orchestra.run.vm04:> sudo pkill -f 'journalctl -f -n 0 -u ceph-5769e1c8-1be5-11f1-a591-591820987f3e@osd.1.service' 2026-03-09T18:36:10.450 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:36:10.450 INFO:tasks.cephadm.osd.1:Stopped osd.1 2026-03-09T18:36:10.450 INFO:tasks.cephadm.osd.2:Stopping osd.2... 2026-03-09T18:36:10.450 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ceph-5769e1c8-1be5-11f1-a591-591820987f3e@osd.2 2026-03-09T18:36:10.586 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 18:36:10 vm04 systemd[1]: Stopping Ceph osd.2 for 5769e1c8-1be5-11f1-a591-591820987f3e... 
2026-03-09T18:36:10.966 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 18:36:10 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-2[71119]: 2026-03-09T18:36:10.585+0000 7f12bb642640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.2 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T18:36:10.967 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 18:36:10 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-2[71119]: 2026-03-09T18:36:10.585+0000 7f12bb642640 -1 osd.2 383 *** Got signal Terminated *** 2026-03-09T18:36:10.967 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 18:36:10 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-2[71119]: 2026-03-09T18:36:10.585+0000 7f12bb642640 -1 osd.2 383 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T18:36:15.944 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 18:36:15 vm04 podman[90300]: 2026-03-09 18:36:15.615836617 +0000 UTC m=+5.043026539 container died baa8cf8aa7768941c0c0b552728a0b58f68d911a2c46f842ea2d7b2618ea88c4 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-2, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default) 2026-03-09T18:36:15.944 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 18:36:15 vm04 
podman[90300]: 2026-03-09 18:36:15.643865984 +0000 UTC m=+5.071055917 container remove baa8cf8aa7768941c0c0b552728a0b58f68d911a2c46f842ea2d7b2618ea88c4 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-2, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True) 2026-03-09T18:36:15.944 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 18:36:15 vm04 bash[90300]: ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-2 2026-03-09T18:36:15.944 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 18:36:15 vm04 podman[90367]: 2026-03-09 18:36:15.771896644 +0000 UTC m=+0.018509649 container create 40f4ea343433ca9275dd1715272a66681040be0ed4797c0a13d5e002836f9cf0 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-2-deactivate, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-09T18:36:15.944 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 18:36:15 vm04 podman[90367]: 2026-03-09 18:36:15.808511853 +0000 UTC m=+0.055124858 container init 40f4ea343433ca9275dd1715272a66681040be0ed4797c0a13d5e002836f9cf0 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-2-deactivate, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS) 2026-03-09T18:36:15.944 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 18:36:15 vm04 podman[90367]: 2026-03-09 18:36:15.814639413 +0000 UTC m=+0.061252418 container start 40f4ea343433ca9275dd1715272a66681040be0ed4797c0a13d5e002836f9cf0 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-2-deactivate, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, 
CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2) 2026-03-09T18:36:15.944 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 18:36:15 vm04 podman[90367]: 2026-03-09 18:36:15.815561931 +0000 UTC m=+0.062174936 container attach 40f4ea343433ca9275dd1715272a66681040be0ed4797c0a13d5e002836f9cf0 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-2-deactivate, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2) 2026-03-09T18:36:15.944 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 18:36:15 vm04 podman[90367]: 2026-03-09 18:36:15.76224653 +0000 UTC m=+0.008859535 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T18:36:15.992 DEBUG:teuthology.orchestra.run.vm04:> sudo pkill -f 'journalctl -f -n 0 -u ceph-5769e1c8-1be5-11f1-a591-591820987f3e@osd.2.service' 2026-03-09T18:36:16.023 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:36:16.024 INFO:tasks.cephadm.osd.2:Stopped osd.2 2026-03-09T18:36:16.024 INFO:tasks.cephadm.osd.3:Stopping osd.3... 
2026-03-09T18:36:16.024 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ceph-5769e1c8-1be5-11f1-a591-591820987f3e@osd.3 2026-03-09T18:36:16.217 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 18:36:16 vm04 systemd[1]: Stopping Ceph osd.3 for 5769e1c8-1be5-11f1-a591-591820987f3e... 2026-03-09T18:36:16.217 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 18:36:16 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-3[76183]: 2026-03-09T18:36:16.163+0000 7f7e06d3d640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.3 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T18:36:16.217 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 18:36:16 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-3[76183]: 2026-03-09T18:36:16.163+0000 7f7e06d3d640 -1 osd.3 383 *** Got signal Terminated *** 2026-03-09T18:36:16.217 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 18:36:16 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-3[76183]: 2026-03-09T18:36:16.163+0000 7f7e06d3d640 -1 osd.3 383 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T18:36:21.447 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 18:36:21 vm04 podman[90463]: 2026-03-09 18:36:21.192417531 +0000 UTC m=+5.042168653 container died 223bd7cd978bd259fd600d96b3d6af2573596f11f6a94080435b10abe74bff6e (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , ceph=True, org.label-schema.build-date=20260223) 2026-03-09T18:36:21.447 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 18:36:21 vm04 podman[90463]: 2026-03-09 18:36:21.213725728 +0000 UTC m=+5.063476850 container remove 223bd7cd978bd259fd600d96b3d6af2573596f11f6a94080435b10abe74bff6e (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=squid, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True) 2026-03-09T18:36:21.447 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 18:36:21 vm04 bash[90463]: ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-3 2026-03-09T18:36:21.447 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 18:36:21 vm04 podman[90529]: 2026-03-09 18:36:21.35513978 +0000 UTC m=+0.016466596 container create b02c319e9db4dfbe903a23b3355313dcb72c1944458b9e1ae1597c30eb189abf (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-3-deactivate, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , 
FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, OSD_FLAVOR=default, org.label-schema.license=GPLv2) 2026-03-09T18:36:21.447 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 18:36:21 vm04 podman[90529]: 2026-03-09 18:36:21.391783983 +0000 UTC m=+0.053110799 container init b02c319e9db4dfbe903a23b3355313dcb72c1944458b9e1ae1597c30eb189abf (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-3-deactivate, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-09T18:36:21.447 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 18:36:21 vm04 podman[90529]: 2026-03-09 18:36:21.394405 +0000 UTC m=+0.055731816 container start b02c319e9db4dfbe903a23b3355313dcb72c1944458b9e1ae1597c30eb189abf (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-3-deactivate, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, 
FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-09T18:36:21.447 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 18:36:21 vm04 podman[90529]: 2026-03-09 18:36:21.403311301 +0000 UTC m=+0.064638117 container attach b02c319e9db4dfbe903a23b3355313dcb72c1944458b9e1ae1597c30eb189abf (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-3-deactivate, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-09T18:36:21.564 DEBUG:teuthology.orchestra.run.vm04:> sudo pkill -f 'journalctl -f -n 0 -u ceph-5769e1c8-1be5-11f1-a591-591820987f3e@osd.3.service' 2026-03-09T18:36:21.595 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:36:21.595 INFO:tasks.cephadm.osd.3:Stopped osd.3 2026-03-09T18:36:21.595 INFO:tasks.cephadm.osd.4:Stopping osd.4... 
2026-03-09T18:36:21.595 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-5769e1c8-1be5-11f1-a591-591820987f3e@osd.4 2026-03-09T18:36:22.108 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 18:36:21 vm09 systemd[1]: Stopping Ceph osd.4 for 5769e1c8-1be5-11f1-a591-591820987f3e... 2026-03-09T18:36:22.108 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 18:36:21 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-4[58851]: 2026-03-09T18:36:21.699+0000 7fe10c526640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.4 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T18:36:22.108 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 18:36:21 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-4[58851]: 2026-03-09T18:36:21.699+0000 7fe10c526640 -1 osd.4 383 *** Got signal Terminated *** 2026-03-09T18:36:22.108 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 18:36:21 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-4[58851]: 2026-03-09T18:36:21.699+0000 7fe10c526640 -1 osd.4 383 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T18:36:25.712 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:25 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:36:25.403+0000 7f49b5020640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.104:6806 osd.0 since back 2026-03-09T18:36:03.558905+0000 front 2026-03-09T18:36:03.558801+0000 (oldest deadline 2026-03-09T18:36:25.258332+0000) 2026-03-09T18:36:26.730 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:26 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:36:26.392+0000 7f49b5020640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.104:6806 osd.0 since back 2026-03-09T18:36:03.558905+0000 front 2026-03-09T18:36:03.558801+0000 (oldest deadline 2026-03-09T18:36:25.258332+0000) 2026-03-09T18:36:26.984 
INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 18:36:26 vm09 podman[82800]: 2026-03-09 18:36:26.729805264 +0000 UTC m=+5.043804025 container died 6e836a00834a0370e8b799c37b71dad6ece7402802d85a830ca7cf593fd7219b (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-4, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-09T18:36:26.985 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 18:36:26 vm09 podman[82800]: 2026-03-09 18:36:26.752064474 +0000 UTC m=+5.066063235 container remove 6e836a00834a0370e8b799c37b71dad6ece7402802d85a830ca7cf593fd7219b (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-4, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-09T18:36:26.985 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 18:36:26 vm09 bash[82800]: ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-4 2026-03-09T18:36:26.985 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 18:36:26 vm09 podman[82882]: 2026-03-09 18:36:26.892390647 +0000 UTC m=+0.016105613 container create 72750ef2a8af29c8d57b8ea5195fef1e7784e9aef725abeecfa5fc3f8baf19b9 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-4-deactivate, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-09T18:36:26.985 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 18:36:26 vm09 podman[82882]: 2026-03-09 18:36:26.929331475 +0000 UTC m=+0.053046431 container init 72750ef2a8af29c8d57b8ea5195fef1e7784e9aef725abeecfa5fc3f8baf19b9 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-4-deactivate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=squid, org.label-schema.build-date=20260223, OSD_FLAVOR=default, 
org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-09T18:36:26.985 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 18:36:26 vm09 podman[82882]: 2026-03-09 18:36:26.932547304 +0000 UTC m=+0.056262270 container start 72750ef2a8af29c8d57b8ea5195fef1e7784e9aef725abeecfa5fc3f8baf19b9 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-4-deactivate, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-09T18:36:26.985 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 18:36:26 vm09 podman[82882]: 2026-03-09 18:36:26.933417383 +0000 UTC m=+0.057132349 container attach 72750ef2a8af29c8d57b8ea5195fef1e7784e9aef725abeecfa5fc3f8baf19b9 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-4-deactivate, org.label-schema.build-date=20260223, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, 
org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-09T18:36:27.110 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-5769e1c8-1be5-11f1-a591-591820987f3e@osd.4.service' 2026-03-09T18:36:27.143 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:36:27.143 INFO:tasks.cephadm.osd.4:Stopped osd.4 2026-03-09T18:36:27.143 INFO:tasks.cephadm.osd.5:Stopping osd.5... 2026-03-09T18:36:27.143 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-5769e1c8-1be5-11f1-a591-591820987f3e@osd.5 2026-03-09T18:36:27.285 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 18:36:27 vm09 systemd[1]: Stopping Ceph osd.5 for 5769e1c8-1be5-11f1-a591-591820987f3e... 
2026-03-09T18:36:27.608 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:27 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:36:27.404+0000 7f49b5020640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.104:6806 osd.0 since back 2026-03-09T18:36:03.558905+0000 front 2026-03-09T18:36:03.558801+0000 (oldest deadline 2026-03-09T18:36:25.258332+0000) 2026-03-09T18:36:27.608 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 18:36:27 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-5[63689]: 2026-03-09T18:36:27.283+0000 7fdda0b54640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.5 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T18:36:27.608 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 18:36:27 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-5[63689]: 2026-03-09T18:36:27.283+0000 7fdda0b54640 -1 osd.5 383 *** Got signal Terminated *** 2026-03-09T18:36:27.608 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 18:36:27 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-5[63689]: 2026-03-09T18:36:27.283+0000 7fdda0b54640 -1 osd.5 383 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T18:36:28.608 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:28 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:36:28.444+0000 7f49b5020640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.104:6806 osd.0 since back 2026-03-09T18:36:03.558905+0000 front 2026-03-09T18:36:03.558801+0000 (oldest deadline 2026-03-09T18:36:25.258332+0000) 2026-03-09T18:36:28.608 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 18:36:28 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-5[63689]: 2026-03-09T18:36:28.356+0000 7fdd9c96c640 -1 osd.5 383 heartbeat_check: no reply from 192.168.123.104:6806 osd.0 since back 2026-03-09T18:36:03.063376+0000 front 2026-03-09T18:36:03.063298+0000 (oldest deadline 
2026-03-09T18:36:27.763199+0000) 2026-03-09T18:36:29.608 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:29 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:36:29.395+0000 7f49b5020640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.104:6806 osd.0 since back 2026-03-09T18:36:03.558905+0000 front 2026-03-09T18:36:03.558801+0000 (oldest deadline 2026-03-09T18:36:25.258332+0000) 2026-03-09T18:36:29.608 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 18:36:29 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-5[63689]: 2026-03-09T18:36:29.328+0000 7fdd9c96c640 -1 osd.5 383 heartbeat_check: no reply from 192.168.123.104:6806 osd.0 since back 2026-03-09T18:36:03.063376+0000 front 2026-03-09T18:36:03.063298+0000 (oldest deadline 2026-03-09T18:36:27.763199+0000) 2026-03-09T18:36:29.608 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 18:36:29 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-6[68773]: 2026-03-09T18:36:29.314+0000 7fd84357c640 -1 osd.6 383 heartbeat_check: no reply from 192.168.123.104:6806 osd.0 since back 2026-03-09T18:36:02.819545+0000 front 2026-03-09T18:36:02.819611+0000 (oldest deadline 2026-03-09T18:36:28.719095+0000) 2026-03-09T18:36:30.608 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:36:30.438+0000 7f49b5020640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.104:6806 osd.0 since back 2026-03-09T18:36:03.558905+0000 front 2026-03-09T18:36:03.558801+0000 (oldest deadline 2026-03-09T18:36:25.258332+0000) 2026-03-09T18:36:30.608 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 18:36:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-5[63689]: 2026-03-09T18:36:30.293+0000 7fdd9c96c640 -1 osd.5 383 heartbeat_check: no reply from 192.168.123.104:6806 osd.0 since back 2026-03-09T18:36:03.063376+0000 front 2026-03-09T18:36:03.063298+0000 (oldest deadline 2026-03-09T18:36:27.763199+0000) 2026-03-09T18:36:30.608 
INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 18:36:30 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-6[68773]: 2026-03-09T18:36:30.288+0000 7fd84357c640 -1 osd.6 383 heartbeat_check: no reply from 192.168.123.104:6806 osd.0 since back 2026-03-09T18:36:02.819545+0000 front 2026-03-09T18:36:02.819611+0000 (oldest deadline 2026-03-09T18:36:28.719095+0000) 2026-03-09T18:36:31.608 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:31 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:36:31.431+0000 7f49b5020640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.104:6806 osd.0 since back 2026-03-09T18:36:03.558905+0000 front 2026-03-09T18:36:03.558801+0000 (oldest deadline 2026-03-09T18:36:25.258332+0000) 2026-03-09T18:36:31.608 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 18:36:31 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-5[63689]: 2026-03-09T18:36:31.251+0000 7fdd9c96c640 -1 osd.5 383 heartbeat_check: no reply from 192.168.123.104:6806 osd.0 since back 2026-03-09T18:36:03.063376+0000 front 2026-03-09T18:36:03.063298+0000 (oldest deadline 2026-03-09T18:36:27.763199+0000) 2026-03-09T18:36:31.608 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 18:36:31 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-6[68773]: 2026-03-09T18:36:31.291+0000 7fd84357c640 -1 osd.6 383 heartbeat_check: no reply from 192.168.123.104:6806 osd.0 since back 2026-03-09T18:36:02.819545+0000 front 2026-03-09T18:36:02.819611+0000 (oldest deadline 2026-03-09T18:36:28.719095+0000) 2026-03-09T18:36:32.513 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:32 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:36:32.421+0000 7f49b5020640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.104:6806 osd.0 since back 2026-03-09T18:36:03.558905+0000 front 2026-03-09T18:36:03.558801+0000 (oldest deadline 2026-03-09T18:36:25.258332+0000) 2026-03-09T18:36:32.513 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 18:36:32 vm09 
ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-5[63689]: 2026-03-09T18:36:32.261+0000 7fdd9c96c640 -1 osd.5 383 heartbeat_check: no reply from 192.168.123.104:6806 osd.0 since back 2026-03-09T18:36:03.063376+0000 front 2026-03-09T18:36:03.063298+0000 (oldest deadline 2026-03-09T18:36:27.763199+0000) 2026-03-09T18:36:32.513 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 18:36:32 vm09 podman[82977]: 2026-03-09 18:36:32.326915201 +0000 UTC m=+5.054979270 container died 1c0de687ebf599662ff81fd14f70f144586c80769479c23ba217b81b8021df18 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-5, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-09T18:36:32.513 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 18:36:32 vm09 podman[82977]: 2026-03-09 18:36:32.341722955 +0000 UTC m=+5.069787015 container remove 1c0de687ebf599662ff81fd14f70f144586c80769479c23ba217b81b8021df18 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-5, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True) 2026-03-09T18:36:32.513 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 18:36:32 vm09 bash[82977]: ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-5 2026-03-09T18:36:32.513 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 18:36:32 vm09 podman[83044]: 2026-03-09 18:36:32.492963776 +0000 UTC m=+0.020713027 container create 23eab18a079e840ea6130742614ad2530d445c6601ba0b3e26ecaec78e75ba4d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-5-deactivate, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default) 2026-03-09T18:36:32.513 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 18:36:32 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-6[68773]: 2026-03-09T18:36:32.304+0000 7fd84357c640 -1 osd.6 383 heartbeat_check: no reply from 192.168.123.104:6806 osd.0 since back 2026-03-09T18:36:02.819545+0000 front 2026-03-09T18:36:02.819611+0000 (oldest deadline 2026-03-09T18:36:28.719095+0000) 2026-03-09T18:36:32.536 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 
18:36:32 vm09 podman[83044]: 2026-03-09 18:36:32.531149274 +0000 UTC m=+0.058898525 container init 23eab18a079e840ea6130742614ad2530d445c6601ba0b3e26ecaec78e75ba4d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-5-deactivate, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-09T18:36:32.536 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 18:36:32 vm09 podman[83044]: 2026-03-09 18:36:32.535440486 +0000 UTC m=+0.063189726 container start 23eab18a079e840ea6130742614ad2530d445c6601ba0b3e26ecaec78e75ba4d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-5-deactivate, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release 
Team ) 2026-03-09T18:36:32.688 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-5769e1c8-1be5-11f1-a591-591820987f3e@osd.5.service' 2026-03-09T18:36:32.720 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:36:32.720 INFO:tasks.cephadm.osd.5:Stopped osd.5 2026-03-09T18:36:32.720 INFO:tasks.cephadm.osd.6:Stopping osd.6... 2026-03-09T18:36:32.720 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-5769e1c8-1be5-11f1-a591-591820987f3e@osd.6 2026-03-09T18:36:33.108 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 18:36:32 vm09 systemd[1]: Stopping Ceph osd.6 for 5769e1c8-1be5-11f1-a591-591820987f3e... 2026-03-09T18:36:33.108 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 18:36:32 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-6[68773]: 2026-03-09T18:36:32.868+0000 7fd847764640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.6 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T18:36:33.108 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 18:36:32 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-6[68773]: 2026-03-09T18:36:32.868+0000 7fd847764640 -1 osd.6 383 *** Got signal Terminated *** 2026-03-09T18:36:33.108 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 18:36:32 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-6[68773]: 2026-03-09T18:36:32.868+0000 7fd847764640 -1 osd.6 383 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T18:36:33.608 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:33 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:36:33.458+0000 7f49b5020640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.104:6806 osd.0 since back 2026-03-09T18:36:03.558905+0000 front 2026-03-09T18:36:03.558801+0000 (oldest deadline 2026-03-09T18:36:25.258332+0000) 2026-03-09T18:36:33.608 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 
18:36:33 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-6[68773]: 2026-03-09T18:36:33.325+0000 7fd84357c640 -1 osd.6 383 heartbeat_check: no reply from 192.168.123.104:6806 osd.0 since back 2026-03-09T18:36:02.819545+0000 front 2026-03-09T18:36:02.819611+0000 (oldest deadline 2026-03-09T18:36:28.719095+0000) 2026-03-09T18:36:34.608 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:34 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:36:34.453+0000 7f49b5020640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.104:6806 osd.0 since back 2026-03-09T18:36:03.558905+0000 front 2026-03-09T18:36:03.558801+0000 (oldest deadline 2026-03-09T18:36:25.258332+0000) 2026-03-09T18:36:34.608 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 18:36:34 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-6[68773]: 2026-03-09T18:36:34.300+0000 7fd84357c640 -1 osd.6 383 heartbeat_check: no reply from 192.168.123.104:6806 osd.0 since back 2026-03-09T18:36:02.819545+0000 front 2026-03-09T18:36:02.819611+0000 (oldest deadline 2026-03-09T18:36:28.719095+0000) 2026-03-09T18:36:34.608 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 18:36:34 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-6[68773]: 2026-03-09T18:36:34.300+0000 7fd84357c640 -1 osd.6 383 heartbeat_check: no reply from 192.168.123.104:6814 osd.1 since back 2026-03-09T18:36:08.719698+0000 front 2026-03-09T18:36:08.719750+0000 (oldest deadline 2026-03-09T18:36:34.019366+0000) 2026-03-09T18:36:35.608 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:36:35.468+0000 7f49b5020640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.104:6806 osd.0 since back 2026-03-09T18:36:03.558905+0000 front 2026-03-09T18:36:03.558801+0000 (oldest deadline 2026-03-09T18:36:25.258332+0000) 2026-03-09T18:36:35.608 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 
2026-03-09T18:36:35.468+0000 7f49b5020640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.104:6814 osd.1 since back 2026-03-09T18:36:09.359225+0000 front 2026-03-09T18:36:09.359462+0000 (oldest deadline 2026-03-09T18:36:34.659004+0000) 2026-03-09T18:36:35.608 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 18:36:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-6[68773]: 2026-03-09T18:36:35.294+0000 7fd84357c640 -1 osd.6 383 heartbeat_check: no reply from 192.168.123.104:6806 osd.0 since back 2026-03-09T18:36:02.819545+0000 front 2026-03-09T18:36:02.819611+0000 (oldest deadline 2026-03-09T18:36:28.719095+0000) 2026-03-09T18:36:35.608 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 18:36:35 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-6[68773]: 2026-03-09T18:36:35.294+0000 7fd84357c640 -1 osd.6 383 heartbeat_check: no reply from 192.168.123.104:6814 osd.1 since back 2026-03-09T18:36:08.719698+0000 front 2026-03-09T18:36:08.719750+0000 (oldest deadline 2026-03-09T18:36:34.019366+0000) 2026-03-09T18:36:36.407 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 18:36:36 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-6[68773]: 2026-03-09T18:36:36.247+0000 7fd84357c640 -1 osd.6 383 heartbeat_check: no reply from 192.168.123.104:6806 osd.0 since back 2026-03-09T18:36:02.819545+0000 front 2026-03-09T18:36:02.819611+0000 (oldest deadline 2026-03-09T18:36:28.719095+0000) 2026-03-09T18:36:36.407 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 18:36:36 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-6[68773]: 2026-03-09T18:36:36.247+0000 7fd84357c640 -1 osd.6 383 heartbeat_check: no reply from 192.168.123.104:6814 osd.1 since back 2026-03-09T18:36:08.719698+0000 front 2026-03-09T18:36:08.719750+0000 (oldest deadline 2026-03-09T18:36:34.019366+0000) 2026-03-09T18:36:36.858 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:36 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:36:36.506+0000 7f49b5020640 -1 osd.7 383 heartbeat_check: no 
reply from 192.168.123.104:6806 osd.0 since back 2026-03-09T18:36:03.558905+0000 front 2026-03-09T18:36:03.558801+0000 (oldest deadline 2026-03-09T18:36:25.258332+0000) 2026-03-09T18:36:36.858 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:36 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:36:36.506+0000 7f49b5020640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.104:6814 osd.1 since back 2026-03-09T18:36:09.359225+0000 front 2026-03-09T18:36:09.359462+0000 (oldest deadline 2026-03-09T18:36:34.659004+0000) 2026-03-09T18:36:37.608 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:37 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:36:37.502+0000 7f49b5020640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.104:6806 osd.0 since back 2026-03-09T18:36:03.558905+0000 front 2026-03-09T18:36:03.558801+0000 (oldest deadline 2026-03-09T18:36:25.258332+0000) 2026-03-09T18:36:37.608 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:37 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:36:37.502+0000 7f49b5020640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.104:6814 osd.1 since back 2026-03-09T18:36:09.359225+0000 front 2026-03-09T18:36:09.359462+0000 (oldest deadline 2026-03-09T18:36:34.659004+0000) 2026-03-09T18:36:37.608 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 18:36:37 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-6[68773]: 2026-03-09T18:36:37.277+0000 7fd84357c640 -1 osd.6 383 heartbeat_check: no reply from 192.168.123.104:6806 osd.0 since back 2026-03-09T18:36:02.819545+0000 front 2026-03-09T18:36:02.819611+0000 (oldest deadline 2026-03-09T18:36:28.719095+0000) 2026-03-09T18:36:37.608 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 18:36:37 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-6[68773]: 2026-03-09T18:36:37.277+0000 7fd84357c640 -1 osd.6 383 heartbeat_check: no reply from 192.168.123.104:6814 osd.1 since back 
2026-03-09T18:36:08.719698+0000 front 2026-03-09T18:36:08.719750+0000 (oldest deadline 2026-03-09T18:36:34.019366+0000) 2026-03-09T18:36:38.234 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 18:36:37 vm09 podman[83141]: 2026-03-09 18:36:37.904424786 +0000 UTC m=+5.050688882 container died 496524e94c57c15d9151fa575bca1d9fde792350cfe8e91a71ef7f0a5079bcd6 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-6, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default) 2026-03-09T18:36:38.234 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 18:36:37 vm09 podman[83141]: 2026-03-09 18:36:37.927291143 +0000 UTC m=+5.073555229 container remove 496524e94c57c15d9151fa575bca1d9fde792350cfe8e91a71ef7f0a5079bcd6 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-6, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, 
OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) 2026-03-09T18:36:38.234 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 18:36:37 vm09 bash[83141]: ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-6 2026-03-09T18:36:38.234 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 18:36:38 vm09 podman[83207]: 2026-03-09 18:36:38.060900937 +0000 UTC m=+0.015748386 container create 72bc12836594bf2bb8711972e6a20fb7172861868310b68868583d82e04e6a51 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-6-deactivate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-09T18:36:38.234 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 18:36:38 vm09 podman[83207]: 2026-03-09 18:36:38.106777969 +0000 UTC m=+0.061625408 container init 72bc12836594bf2bb8711972e6a20fb7172861868310b68868583d82e04e6a51 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-6-deactivate, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, 
org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-09T18:36:38.234 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 18:36:38 vm09 podman[83207]: 2026-03-09 18:36:38.109840712 +0000 UTC m=+0.064688161 container start 72bc12836594bf2bb8711972e6a20fb7172861868310b68868583d82e04e6a51 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-6-deactivate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-09T18:36:38.234 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 18:36:38 vm09 podman[83207]: 2026-03-09 18:36:38.11417314 +0000 UTC m=+0.069020589 container attach 72bc12836594bf2bb8711972e6a20fb7172861868310b68868583d82e04e6a51 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-6-deactivate, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, 
CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True) 2026-03-09T18:36:38.234 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 18:36:38 vm09 podman[83207]: 2026-03-09 18:36:38.054745936 +0000 UTC m=+0.009593385 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T18:36:38.262 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-5769e1c8-1be5-11f1-a591-591820987f3e@osd.6.service' 2026-03-09T18:36:38.294 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:36:38.294 INFO:tasks.cephadm.osd.6:Stopped osd.6 2026-03-09T18:36:38.294 INFO:tasks.cephadm.osd.7:Stopping osd.7... 2026-03-09T18:36:38.294 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-5769e1c8-1be5-11f1-a591-591820987f3e@osd.7 2026-03-09T18:36:38.608 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:38 vm09 systemd[1]: Stopping Ceph osd.7 for 5769e1c8-1be5-11f1-a591-591820987f3e... 
2026-03-09T18:36:38.608 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:38 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:36:38.430+0000 7f49b8a07640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.7 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T18:36:38.608 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:38 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:36:38.430+0000 7f49b8a07640 -1 osd.7 383 *** Got signal Terminated *** 2026-03-09T18:36:38.608 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:38 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:36:38.430+0000 7f49b8a07640 -1 osd.7 383 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T18:36:38.608 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:38 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:36:38.466+0000 7f49b5020640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.104:6806 osd.0 since back 2026-03-09T18:36:03.558905+0000 front 2026-03-09T18:36:03.558801+0000 (oldest deadline 2026-03-09T18:36:25.258332+0000) 2026-03-09T18:36:38.608 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:38 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:36:38.466+0000 7f49b5020640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.104:6814 osd.1 since back 2026-03-09T18:36:09.359225+0000 front 2026-03-09T18:36:09.359462+0000 (oldest deadline 2026-03-09T18:36:34.659004+0000) 2026-03-09T18:36:38.608 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:38 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:36:38.466+0000 7f49b5020640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.104:6822 osd.2 since back 2026-03-09T18:36:14.659459+0000 front 2026-03-09T18:36:14.659481+0000 (oldest deadline 
2026-03-09T18:36:38.159281+0000) 2026-03-09T18:36:39.742 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:36:39.428+0000 7f49b5020640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.104:6806 osd.0 since back 2026-03-09T18:36:03.558905+0000 front 2026-03-09T18:36:03.558801+0000 (oldest deadline 2026-03-09T18:36:25.258332+0000) 2026-03-09T18:36:39.742 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:36:39.428+0000 7f49b5020640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.104:6814 osd.1 since back 2026-03-09T18:36:09.359225+0000 front 2026-03-09T18:36:09.359462+0000 (oldest deadline 2026-03-09T18:36:34.659004+0000) 2026-03-09T18:36:39.742 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:36:39.428+0000 7f49b5020640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.104:6822 osd.2 since back 2026-03-09T18:36:14.659459+0000 front 2026-03-09T18:36:14.659481+0000 (oldest deadline 2026-03-09T18:36:38.159281+0000) 2026-03-09T18:36:40.108 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:36:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[81116]: ts=2026-03-09T18:36:39.743Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=ceph msg="Unable to refresh target groups" err="Get \"http://192.168.123.104:8765/sd/prometheus/sd-config?service=mgr-prometheus\": dial tcp 192.168.123.104:8765: connect: connection refused" 2026-03-09T18:36:40.108 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:36:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[81116]: ts=2026-03-09T18:36:39.743Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=nfs msg="Unable to refresh target groups" err="Get 
\"http://192.168.123.104:8765/sd/prometheus/sd-config?service=nfs\": dial tcp 192.168.123.104:8765: connect: connection refused" 2026-03-09T18:36:40.108 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:36:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[81116]: ts=2026-03-09T18:36:39.743Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=node msg="Unable to refresh target groups" err="Get \"http://192.168.123.104:8765/sd/prometheus/sd-config?service=node-exporter\": dial tcp 192.168.123.104:8765: connect: connection refused" 2026-03-09T18:36:40.108 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:36:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[81116]: ts=2026-03-09T18:36:39.746Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=ceph-exporter msg="Unable to refresh target groups" err="Get \"http://192.168.123.104:8765/sd/prometheus/sd-config?service=ceph-exporter\": dial tcp 192.168.123.104:8765: connect: connection refused" 2026-03-09T18:36:40.108 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:36:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[81116]: ts=2026-03-09T18:36:39.747Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=nvmeof msg="Unable to refresh target groups" err="Get \"http://192.168.123.104:8765/sd/prometheus/sd-config?service=nvmeof\": dial tcp 192.168.123.104:8765: connect: connection refused" 2026-03-09T18:36:40.108 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 18:36:39 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-prometheus-a[81116]: ts=2026-03-09T18:36:39.747Z caller=refresh.go:90 level=error component="discovery manager notify" discovery=http config=config-0 msg="Unable to refresh target groups" err="Get \"http://192.168.123.104:8765/sd/prometheus/sd-config?service=alertmanager\": dial tcp 192.168.123.104:8765: connect: connection refused" 
2026-03-09T18:36:40.858 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:40 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:36:40.447+0000 7f49b5020640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.104:6806 osd.0 since back 2026-03-09T18:36:03.558905+0000 front 2026-03-09T18:36:03.558801+0000 (oldest deadline 2026-03-09T18:36:25.258332+0000) 2026-03-09T18:36:40.858 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:40 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:36:40.447+0000 7f49b5020640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.104:6814 osd.1 since back 2026-03-09T18:36:09.359225+0000 front 2026-03-09T18:36:09.359462+0000 (oldest deadline 2026-03-09T18:36:34.659004+0000) 2026-03-09T18:36:40.858 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:40 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:36:40.447+0000 7f49b5020640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.104:6822 osd.2 since back 2026-03-09T18:36:14.659459+0000 front 2026-03-09T18:36:14.659481+0000 (oldest deadline 2026-03-09T18:36:38.159281+0000) 2026-03-09T18:36:41.858 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:41 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:36:41.397+0000 7f49b5020640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.104:6806 osd.0 since back 2026-03-09T18:36:03.558905+0000 front 2026-03-09T18:36:03.558801+0000 (oldest deadline 2026-03-09T18:36:25.258332+0000) 2026-03-09T18:36:41.858 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:41 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:36:41.397+0000 7f49b5020640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.104:6814 osd.1 since back 2026-03-09T18:36:09.359225+0000 front 2026-03-09T18:36:09.359462+0000 (oldest deadline 2026-03-09T18:36:34.659004+0000) 2026-03-09T18:36:41.858 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:41 
vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:36:41.397+0000 7f49b5020640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.104:6822 osd.2 since back 2026-03-09T18:36:14.659459+0000 front 2026-03-09T18:36:14.659481+0000 (oldest deadline 2026-03-09T18:36:38.159281+0000) 2026-03-09T18:36:42.858 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:42 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:36:42.423+0000 7f49b5020640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.104:6806 osd.0 since back 2026-03-09T18:36:03.558905+0000 front 2026-03-09T18:36:03.558801+0000 (oldest deadline 2026-03-09T18:36:25.258332+0000) 2026-03-09T18:36:42.858 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:42 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:36:42.423+0000 7f49b5020640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.104:6814 osd.1 since back 2026-03-09T18:36:09.359225+0000 front 2026-03-09T18:36:09.359462+0000 (oldest deadline 2026-03-09T18:36:34.659004+0000) 2026-03-09T18:36:42.858 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:42 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:36:42.423+0000 7f49b5020640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.104:6822 osd.2 since back 2026-03-09T18:36:14.659459+0000 front 2026-03-09T18:36:14.659481+0000 (oldest deadline 2026-03-09T18:36:38.159281+0000) 2026-03-09T18:36:43.651 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:43 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:36:43.373+0000 7f49b5020640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.104:6806 osd.0 since back 2026-03-09T18:36:03.558905+0000 front 2026-03-09T18:36:03.558801+0000 (oldest deadline 2026-03-09T18:36:25.258332+0000) 2026-03-09T18:36:43.651 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:43 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 
2026-03-09T18:36:43.373+0000 7f49b5020640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.104:6814 osd.1 since back 2026-03-09T18:36:09.359225+0000 front 2026-03-09T18:36:09.359462+0000 (oldest deadline 2026-03-09T18:36:34.659004+0000) 2026-03-09T18:36:43.651 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:43 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:36:43.373+0000 7f49b5020640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.104:6822 osd.2 since back 2026-03-09T18:36:14.659459+0000 front 2026-03-09T18:36:14.659481+0000 (oldest deadline 2026-03-09T18:36:38.159281+0000) 2026-03-09T18:36:43.651 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:43 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7[73631]: 2026-03-09T18:36:43.373+0000 7f49b5020640 -1 osd.7 383 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.24427.0:418 2.0 2:353ec8f9:::gateway.conf:head [getxattr epoch in=5b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e383) 2026-03-09T18:36:43.651 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:43 vm09 podman[83301]: 2026-03-09 18:36:43.458081439 +0000 UTC m=+5.039563641 container died c3c24196ae2df7f7882472d16be3272a2e039cc2f2f0afd3e354f7f9fdb431d8 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, 
CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, ceph=True) 2026-03-09T18:36:43.651 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:43 vm09 podman[83301]: 2026-03-09 18:36:43.484871768 +0000 UTC m=+5.066353970 container remove c3c24196ae2df7f7882472d16be3272a2e039cc2f2f0afd3e354f7f9fdb431d8 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223) 2026-03-09T18:36:43.651 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:43 vm09 bash[83301]: ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7 2026-03-09T18:36:43.651 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 18:36:43 vm09 podman[83366]: 2026-03-09 18:36:43.624482162 +0000 UTC m=+0.015159613 container create 80c6280d93f648c2690e780e70c8a877b36876a51513850a101ddc7bbcf907b5 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-osd-7-deactivate, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, ceph=True, org.label-schema.vendor=CentOS, 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-09T18:36:43.817 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-5769e1c8-1be5-11f1-a591-591820987f3e@osd.7.service' 2026-03-09T18:36:43.848 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:36:43.849 INFO:tasks.cephadm.osd.7:Stopped osd.7 2026-03-09T18:36:43.849 INFO:tasks.cephadm.ceph.rgw.foo.a:Stopping rgw.foo.a... 2026-03-09T18:36:43.849 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ceph-5769e1c8-1be5-11f1-a591-591820987f3e@rgw.foo.a 2026-03-09T18:36:44.217 INFO:journalctl@ceph.rgw.foo.a.vm04.stdout:Mar 09 18:36:43 vm04 systemd[1]: Stopping Ceph rgw.foo.a for 5769e1c8-1be5-11f1-a591-591820987f3e... 
2026-03-09T18:36:44.217 INFO:journalctl@ceph.rgw.foo.a.vm04.stdout:Mar 09 18:36:43 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-rgw-foo-a[80463]: 2026-03-09T18:36:43.948+0000 7f2e0602a640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/radosgw -n client.rgw.foo.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T18:36:44.217 INFO:journalctl@ceph.rgw.foo.a.vm04.stdout:Mar 09 18:36:43 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-rgw-foo-a[80463]: 2026-03-09T18:36:43.948+0000 7f2e09899980 -1 shutting down 2026-03-09T18:36:54.056 DEBUG:teuthology.orchestra.run.vm04:> sudo pkill -f 'journalctl -f -n 0 -u ceph-5769e1c8-1be5-11f1-a591-591820987f3e@rgw.foo.a.service' 2026-03-09T18:36:54.086 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:36:54.086 INFO:tasks.cephadm.ceph.rgw.foo.a:Stopped rgw.foo.a 2026-03-09T18:36:54.086 INFO:tasks.cephadm.prometheus.a:Stopping prometheus.a... 2026-03-09T18:36:54.086 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-5769e1c8-1be5-11f1-a591-591820987f3e@prometheus.a 2026-03-09T18:36:54.275 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-5769e1c8-1be5-11f1-a591-591820987f3e@prometheus.a.service' 2026-03-09T18:36:54.304 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:36:54.304 INFO:tasks.cephadm.prometheus.a:Stopped prometheus.a 2026-03-09T18:36:54.305 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm rm-cluster --fsid 5769e1c8-1be5-11f1-a591-591820987f3e --force --keep-logs 2026-03-09T18:36:54.428 INFO:teuthology.orchestra.run.vm04.stdout:Deleting cluster with fsid: 5769e1c8-1be5-11f1-a591-591820987f3e 2026-03-09T18:36:55.909 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:36:55 vm04 systemd[1]: Stopping Ceph alertmanager.a for 5769e1c8-1be5-11f1-a591-591820987f3e... 
2026-03-09T18:36:55.909 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:36:55 vm04 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-alertmanager-a[86390]: ts=2026-03-09T18:36:55.858Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..." 2026-03-09T18:36:55.909 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:36:55 vm04 podman[91151]: 2026-03-09 18:36:55.869888315 +0000 UTC m=+0.024192156 container died 23f69edc71acce51a4567f406e0a8a6fa91eb66865b8d3602450dbdb2ff041e3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-09T18:36:55.909 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:36:55 vm04 podman[91151]: 2026-03-09 18:36:55.893113922 +0000 UTC m=+0.047417762 container remove 23f69edc71acce51a4567f406e0a8a6fa91eb66865b8d3602450dbdb2ff041e3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-09T18:36:55.910 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:36:55 vm04 podman[91151]: 2026-03-09 18:36:55.894177763 +0000 UTC m=+0.048481603 volume remove 236079768cac12b1a32a4a820dea2e48e9736454d8f6efd0085eaaf31cd2c9b7 2026-03-09T18:36:55.910 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:36:55 vm04 bash[91151]: ceph-5769e1c8-1be5-11f1-a591-591820987f3e-alertmanager-a 2026-03-09T18:36:56.214 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:36:56 vm04 systemd[1]: Stopping Ceph node-exporter.a for 5769e1c8-1be5-11f1-a591-591820987f3e... 
2026-03-09T18:36:56.214 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:36:56 vm04 podman[91251]: 2026-03-09 18:36:56.197490993 +0000 UTC m=+0.026058128 container died 6b9a569049164cd610b9e39cb53dcd7c5e728202dca1d3d72406c9204f514761 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a, maintainer=The Prometheus Authors ) 2026-03-09T18:36:56.214 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:36:55 vm04 systemd[1]: ceph-5769e1c8-1be5-11f1-a591-591820987f3e@alertmanager.a.service: Deactivated successfully. 2026-03-09T18:36:56.214 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 09 18:36:55 vm04 systemd[1]: Stopped Ceph alertmanager.a for 5769e1c8-1be5-11f1-a591-591820987f3e. 2026-03-09T18:36:56.467 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:36:56 vm04 podman[91261]: 2026-03-09 18:36:56.215207136 +0000 UTC m=+0.020722100 container remove 6b9a569049164cd610b9e39cb53dcd7c5e728202dca1d3d72406c9204f514761 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a, maintainer=The Prometheus Authors ) 2026-03-09T18:36:56.467 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:36:56 vm04 bash[91251]: ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-a 2026-03-09T18:36:56.467 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:36:56 vm04 systemd[1]: ceph-5769e1c8-1be5-11f1-a591-591820987f3e@node-exporter.a.service: Main process exited, code=exited, status=143/n/a 2026-03-09T18:36:56.467 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:36:56 vm04 systemd[1]: ceph-5769e1c8-1be5-11f1-a591-591820987f3e@node-exporter.a.service: Failed with result 'exit-code'. 2026-03-09T18:36:56.467 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:36:56 vm04 systemd[1]: Stopped Ceph node-exporter.a for 5769e1c8-1be5-11f1-a591-591820987f3e. 
2026-03-09T18:36:56.467 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 09 18:36:56 vm04 systemd[1]: ceph-5769e1c8-1be5-11f1-a591-591820987f3e@node-exporter.a.service: Consumed 1.170s CPU time. 2026-03-09T18:36:56.834 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm rm-cluster --fsid 5769e1c8-1be5-11f1-a591-591820987f3e --force --keep-logs 2026-03-09T18:36:56.953 INFO:teuthology.orchestra.run.vm09.stdout:Deleting cluster with fsid: 5769e1c8-1be5-11f1-a591-591820987f3e 2026-03-09T18:36:58.608 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:36:58 vm09 systemd[1]: Stopping Ceph iscsi.iscsi.a for 5769e1c8-1be5-11f1-a591-591820987f3e... 2026-03-09T18:36:58.608 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:36:58 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a[78153]: debug Shutdown received 2026-03-09T18:37:08.348 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:37:08 vm09 bash[83891]: time="2026-03-09T18:37:08Z" level=warning msg="StopSignal SIGTERM failed to stop container ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a in 10 seconds, resorting to SIGKILL" 2026-03-09T18:37:08.349 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:37:08 vm09 podman[83891]: 2026-03-09 18:37:08.273567754 +0000 UTC m=+10.034900966 container died 4f52d2a052afaf53b436f3d6910aa8a6333e116ff78cb31dda522d2bfcdcdda3 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True) 2026-03-09T18:37:08.349 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:37:08 vm09 podman[83891]: 2026-03-09 18:37:08.295729542 +0000 UTC m=+10.057062754 container remove 4f52d2a052afaf53b436f3d6910aa8a6333e116ff78cb31dda522d2bfcdcdda3 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS) 2026-03-09T18:37:08.349 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:37:08 vm09 bash[83891]: ceph-5769e1c8-1be5-11f1-a591-591820987f3e-iscsi-iscsi-a 2026-03-09T18:37:08.349 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:37:08 vm09 systemd[1]: ceph-5769e1c8-1be5-11f1-a591-591820987f3e@iscsi.iscsi.a.service: Main process exited, code=exited, status=137/n/a 2026-03-09T18:37:08.608 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:37:08 vm09 systemd[1]: ceph-5769e1c8-1be5-11f1-a591-591820987f3e@iscsi.iscsi.a.service: Failed with result 'exit-code'. 2026-03-09T18:37:08.609 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:37:08 vm09 systemd[1]: Stopped Ceph iscsi.iscsi.a for 5769e1c8-1be5-11f1-a591-591820987f3e. 
2026-03-09T18:37:08.609 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 18:37:08 vm09 systemd[1]: ceph-5769e1c8-1be5-11f1-a591-591820987f3e@iscsi.iscsi.a.service: Consumed 1.171s CPU time. 2026-03-09T18:37:09.327 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:37:09 vm09 systemd[1]: Stopping Ceph node-exporter.b for 5769e1c8-1be5-11f1-a591-591820987f3e... 2026-03-09T18:37:09.327 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:37:08 vm09 systemd[1]: Stopping Ceph grafana.a for 5769e1c8-1be5-11f1-a591-591820987f3e... 2026-03-09T18:37:09.327 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:37:09 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=server t=2026-03-09T18:37:09.030385143Z level=info msg="Shutdown started" reason="System signal: terminated" 2026-03-09T18:37:09.327 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:37:09 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=ticker t=2026-03-09T18:37:09.030763572Z level=info msg=stopped last_tick=2026-03-09T18:37:00Z 2026-03-09T18:37:09.327 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:37:09 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=tracing t=2026-03-09T18:37:09.030948638Z level=info msg="Closing tracing" 2026-03-09T18:37:09.327 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:37:09 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=grafana-apiserver t=2026-03-09T18:37:09.031427484Z level=info msg="StorageObjectCountTracker pruner is exiting" 2026-03-09T18:37:09.327 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:37:09 vm09 ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a[80218]: logger=sqlstore.transactions t=2026-03-09T18:37:09.042582582Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 2026-03-09T18:37:09.327 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:37:09 vm09 podman[84136]: 2026-03-09 
18:37:09.052142875 +0000 UTC m=+0.035126322 container died 15fea638bb6a4566d412d3ad33bbaae7a5d24a14fdfe5e375a0c9830ed3ad630 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a, maintainer=Grafana Labs ) 2026-03-09T18:37:09.327 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:37:09 vm09 podman[84136]: 2026-03-09 18:37:09.07608802 +0000 UTC m=+0.059071478 container remove 15fea638bb6a4566d412d3ad33bbaae7a5d24a14fdfe5e375a0c9830ed3ad630 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a, maintainer=Grafana Labs ) 2026-03-09T18:37:09.327 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:37:09 vm09 bash[84136]: ceph-5769e1c8-1be5-11f1-a591-591820987f3e-grafana-a 2026-03-09T18:37:09.327 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:37:09 vm09 systemd[1]: ceph-5769e1c8-1be5-11f1-a591-591820987f3e@grafana.a.service: Deactivated successfully. 2026-03-09T18:37:09.327 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:37:09 vm09 systemd[1]: Stopped Ceph grafana.a for 5769e1c8-1be5-11f1-a591-591820987f3e. 2026-03-09T18:37:09.327 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 18:37:09 vm09 systemd[1]: ceph-5769e1c8-1be5-11f1-a591-591820987f3e@grafana.a.service: Consumed 3.494s CPU time. 
2026-03-09T18:37:09.600 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:37:09 vm09 podman[84238]: 2026-03-09 18:37:09.398692532 +0000 UTC m=+0.016410204 container died 78ab4c579a47eb616e17330b93a026d5b4fa438d9acb3fbcb7ca83cb7f77531e (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b, maintainer=The Prometheus Authors ) 2026-03-09T18:37:09.600 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:37:09 vm09 podman[84238]: 2026-03-09 18:37:09.410624164 +0000 UTC m=+0.028341826 container remove 78ab4c579a47eb616e17330b93a026d5b4fa438d9acb3fbcb7ca83cb7f77531e (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b, maintainer=The Prometheus Authors ) 2026-03-09T18:37:09.600 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:37:09 vm09 bash[84238]: ceph-5769e1c8-1be5-11f1-a591-591820987f3e-node-exporter-b 2026-03-09T18:37:09.600 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:37:09 vm09 systemd[1]: ceph-5769e1c8-1be5-11f1-a591-591820987f3e@node-exporter.b.service: Main process exited, code=exited, status=143/n/a 2026-03-09T18:37:09.600 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:37:09 vm09 systemd[1]: ceph-5769e1c8-1be5-11f1-a591-591820987f3e@node-exporter.b.service: Failed with result 'exit-code'. 2026-03-09T18:37:09.600 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:37:09 vm09 systemd[1]: Stopped Ceph node-exporter.b for 5769e1c8-1be5-11f1-a591-591820987f3e. 2026-03-09T18:37:09.600 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 18:37:09 vm09 systemd[1]: ceph-5769e1c8-1be5-11f1-a591-591820987f3e@node-exporter.b.service: Consumed 1.118s CPU time. 
2026-03-09T18:37:10.049 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-09T18:37:10.075 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-09T18:37:10.101 INFO:tasks.cephadm:Archiving crash dumps... 2026-03-09T18:37:10.102 DEBUG:teuthology.misc:Transferring archived files from vm04:/var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/crash to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/604/remote/vm04/crash 2026-03-09T18:37:10.102 DEBUG:teuthology.orchestra.run.vm04:> sudo tar c -f - -C /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/crash -- . 2026-03-09T18:37:10.141 INFO:teuthology.orchestra.run.vm04.stderr:tar: /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/crash: Cannot open: No such file or directory 2026-03-09T18:37:10.141 INFO:teuthology.orchestra.run.vm04.stderr:tar: Error is not recoverable: exiting now 2026-03-09T18:37:10.142 DEBUG:teuthology.misc:Transferring archived files from vm09:/var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/crash to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/604/remote/vm09/crash 2026-03-09T18:37:10.142 DEBUG:teuthology.orchestra.run.vm09:> sudo tar c -f - -C /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/crash -- . 2026-03-09T18:37:10.167 INFO:teuthology.orchestra.run.vm09.stderr:tar: /var/lib/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/crash: Cannot open: No such file or directory 2026-03-09T18:37:10.167 INFO:teuthology.orchestra.run.vm09.stderr:tar: Error is not recoverable: exiting now 2026-03-09T18:37:10.168 INFO:tasks.cephadm:Checking cluster log for badness... 
2026-03-09T18:37:10.168 DEBUG:teuthology.orchestra.run.vm04:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph.log | egrep CEPHADM_ | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v 'but it is still running' | egrep -v 'overall HEALTH_' | egrep -v '\(OSDMAP_FLAGS\)' | egrep -v '\(PG_' | egrep -v '\(OSD_' | egrep -v '\(OBJECT_' | egrep -v '\(POOL_APP_NOT_ENABLED\)' | head -n 1 2026-03-09T18:37:10.216 INFO:tasks.cephadm:Compressing logs... 2026-03-09T18:37:10.216 DEBUG:teuthology.orchestra.run.vm04:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-09T18:37:10.258 DEBUG:teuthology.orchestra.run.vm09:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-09T18:37:10.281 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-09T18:37:10.281 INFO:teuthology.orchestra.run.vm04.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory 2026-03-09T18:37:10.283 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-mon.a.log 2026-03-09T18:37:10.283 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph.log 2026-03-09T18:37:10.284 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-09T18:37:10.284 INFO:teuthology.orchestra.run.vm09.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory 2026-03-09T18:37:10.284 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/cephadm.log: /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-mon.a.log: 91.2% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-09T18:37:10.284 
INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-mgr.y.log 2026-03-09T18:37:10.285 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-volume.log 2026-03-09T18:37:10.285 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-mon.b.log 2026-03-09T18:37:10.286 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph.log: 92.4% -- replaced with /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph.log.gz 2026-03-09T18:37:10.287 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph.audit.log 2026-03-09T18:37:10.290 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/cephadm.log: /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph.audit.log 2026-03-09T18:37:10.291 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-mon.b.log: 95.4% -- replaced with /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-volume.log.gz 2026-03-09T18:37:10.293 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-mgr.y.log: gzip -5 --verbose -- /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph.cephadm.log 2026-03-09T18:37:10.293 INFO:teuthology.orchestra.run.vm09.stderr: 91.3% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-09T18:37:10.294 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph.log 2026-03-09T18:37:10.296 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph.audit.log: 90.5% -- replaced with /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph.audit.log.gz 
2026-03-09T18:37:10.296 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-mgr.x.log 2026-03-09T18:37:10.297 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph.log: 86.6% -- replaced with /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph.log.gz 2026-03-09T18:37:10.298 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph.cephadm.log 2026-03-09T18:37:10.298 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph.audit.log: 94.2% -- replaced with /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph.audit.log.gz 2026-03-09T18:37:10.298 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-volume.log 2026-03-09T18:37:10.300 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph.cephadm.log: 88.2% -- replaced with /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph.cephadm.log.gz 2026-03-09T18:37:10.301 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-mgr.x.log: 91.0% -- replaced with /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-mgr.x.log.gz 2026-03-09T18:37:10.302 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-osd.4.log 2026-03-09T18:37:10.302 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph.cephadm.log: 79.3% -- replaced with /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph.cephadm.log.gz 2026-03-09T18:37:10.302 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-osd.5.log 2026-03-09T18:37:10.304 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- 
/var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-mon.c.log 2026-03-09T18:37:10.309 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-osd.4.log: gzip -5 --verbose -- /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-osd.6.log 2026-03-09T18:37:10.311 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-osd.0.log 2026-03-09T18:37:10.318 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-mon.c.log: 95.4%gzip -5 --verbose -- /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-osd.1.log 2026-03-09T18:37:10.318 INFO:teuthology.orchestra.run.vm04.stderr: -- replaced with /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-volume.log.gz 2026-03-09T18:37:10.320 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-osd.5.log: gzip -5 --verbose -- /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-osd.7.log 2026-03-09T18:37:10.329 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-osd.0.log: gzip -5 --verbose -- /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-osd.2.log 2026-03-09T18:37:10.330 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-osd.6.log: gzip -5 --verbose -- /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/tcmu-runner.log 2026-03-09T18:37:10.339 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-osd.1.log: gzip -5 --verbose -- /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-osd.3.log 2026-03-09T18:37:10.342 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-osd.7.log: /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/tcmu-runner.log: 63.1% -- replaced with 
/var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/tcmu-runner.log.gz 2026-03-09T18:37:10.350 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-osd.2.log: gzip -5 --verbose -- /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-client.rgw.foo.a.log 2026-03-09T18:37:10.362 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-osd.3.log: /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-client.rgw.foo.a.log: 59.0% -- replaced with /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-client.rgw.foo.a.log.gz 2026-03-09T18:37:10.544 INFO:teuthology.orchestra.run.vm04.stderr: 89.9% -- replaced with /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-mgr.y.log.gz 2026-03-09T18:37:10.692 INFO:teuthology.orchestra.run.vm04.stderr: 92.0% -- replaced with /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-mon.c.log.gz 2026-03-09T18:37:10.770 INFO:teuthology.orchestra.run.vm09.stderr: 91.9% -- replaced with /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-mon.b.log.gz 2026-03-09T18:37:11.273 INFO:teuthology.orchestra.run.vm04.stderr: 91.5% -- replaced with /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-mon.a.log.gz 2026-03-09T18:37:12.658 INFO:teuthology.orchestra.run.vm09.stderr: 94.6% -- replaced with /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-osd.6.log.gz 2026-03-09T18:37:12.682 INFO:teuthology.orchestra.run.vm09.stderr: 94.7% -- replaced with /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-osd.5.log.gz 2026-03-09T18:37:12.689 INFO:teuthology.orchestra.run.vm04.stderr: 94.7% -- replaced with /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-osd.2.log.gz 2026-03-09T18:37:12.796 INFO:teuthology.orchestra.run.vm09.stderr: 94.7% -- replaced with /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-osd.7.log.gz 2026-03-09T18:37:12.804 INFO:teuthology.orchestra.run.vm09.stderr: 94.8% -- replaced with 
/var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-osd.4.log.gz 2026-03-09T18:37:12.805 INFO:teuthology.orchestra.run.vm09.stderr: 2026-03-09T18:37:12.805 INFO:teuthology.orchestra.run.vm09.stderr:real 0m2.532s 2026-03-09T18:37:12.805 INFO:teuthology.orchestra.run.vm09.stderr:user 0m4.829s 2026-03-09T18:37:12.805 INFO:teuthology.orchestra.run.vm09.stderr:sys 0m0.209s 2026-03-09T18:37:12.964 INFO:teuthology.orchestra.run.vm04.stderr: 94.8% -- replaced with /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-osd.0.log.gz 2026-03-09T18:37:12.998 INFO:teuthology.orchestra.run.vm04.stderr: 94.8% -- replaced with /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-osd.1.log.gz 2026-03-09T18:37:13.135 INFO:teuthology.orchestra.run.vm04.stderr: 94.9% -- replaced with /var/log/ceph/5769e1c8-1be5-11f1-a591-591820987f3e/ceph-osd.3.log.gz 2026-03-09T18:37:13.136 INFO:teuthology.orchestra.run.vm04.stderr: 2026-03-09T18:37:13.136 INFO:teuthology.orchestra.run.vm04.stderr:real 0m2.866s 2026-03-09T18:37:13.136 INFO:teuthology.orchestra.run.vm04.stderr:user 0m5.331s 2026-03-09T18:37:13.136 INFO:teuthology.orchestra.run.vm04.stderr:sys 0m0.242s 2026-03-09T18:37:13.137 INFO:tasks.cephadm:Archiving logs... 2026-03-09T18:37:13.137 DEBUG:teuthology.misc:Transferring archived files from vm04:/var/log/ceph to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/604/remote/vm04/log 2026-03-09T18:37:13.137 DEBUG:teuthology.orchestra.run.vm04:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-09T18:37:13.455 DEBUG:teuthology.misc:Transferring archived files from vm09:/var/log/ceph to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/604/remote/vm09/log 2026-03-09T18:37:13.456 DEBUG:teuthology.orchestra.run.vm09:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-09T18:37:13.709 INFO:tasks.cephadm:Removing cluster... 
2026-03-09T18:37:13.709 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm rm-cluster --fsid 5769e1c8-1be5-11f1-a591-591820987f3e --force 2026-03-09T18:37:13.836 INFO:teuthology.orchestra.run.vm04.stdout:Deleting cluster with fsid: 5769e1c8-1be5-11f1-a591-591820987f3e 2026-03-09T18:37:14.067 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm rm-cluster --fsid 5769e1c8-1be5-11f1-a591-591820987f3e --force 2026-03-09T18:37:14.200 INFO:teuthology.orchestra.run.vm09.stdout:Deleting cluster with fsid: 5769e1c8-1be5-11f1-a591-591820987f3e 2026-03-09T18:37:14.438 INFO:tasks.cephadm:Teardown complete 2026-03-09T18:37:14.438 DEBUG:teuthology.run_tasks:Unwinding manager install 2026-03-09T18:37:14.440 INFO:teuthology.task.install.util:Removing shipped files: /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer... 2026-03-09T18:37:14.440 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer 2026-03-09T18:37:14.442 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer 2026-03-09T18:37:14.480 INFO:teuthology.task.install.rpm:Removing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd on rpm system. 
2026-03-09T18:37:14.480 DEBUG:teuthology.orchestra.run.vm04:> 2026-03-09T18:37:14.480 DEBUG:teuthology.orchestra.run.vm04:> for d in ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd ; do 2026-03-09T18:37:14.480 DEBUG:teuthology.orchestra.run.vm04:> sudo yum -y remove $d || true 2026-03-09T18:37:14.480 DEBUG:teuthology.orchestra.run.vm04:> done 2026-03-09T18:37:14.487 INFO:teuthology.task.install.rpm:Removing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd on rpm system. 2026-03-09T18:37:14.487 DEBUG:teuthology.orchestra.run.vm09:> 2026-03-09T18:37:14.487 DEBUG:teuthology.orchestra.run.vm09:> for d in ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd ; do 2026-03-09T18:37:14.487 DEBUG:teuthology.orchestra.run.vm09:> sudo yum -y remove $d || true 2026-03-09T18:37:14.487 DEBUG:teuthology.orchestra.run.vm09:> done 2026-03-09T18:37:14.636 INFO:teuthology.orchestra.run.vm04.stdout:Waiting for process with pid 91672 to finish. 2026-03-09T18:37:14.685 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 
2026-03-09T18:37:14.685 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T18:37:14.685 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size 2026-03-09T18:37:14.685 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T18:37:14.685 INFO:teuthology.orchestra.run.vm09.stdout:Removing: 2026-03-09T18:37:14.685 INFO:teuthology.orchestra.run.vm09.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 39 M 2026-03-09T18:37:14.685 INFO:teuthology.orchestra.run.vm09.stdout:Removing unused dependencies: 2026-03-09T18:37:14.685 INFO:teuthology.orchestra.run.vm09.stdout: mailcap noarch 2.1.49-5.el9 @baseos 78 k 2026-03-09T18:37:14.685 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:37:14.685 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary 2026-03-09T18:37:14.685 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T18:37:14.685 INFO:teuthology.orchestra.run.vm09.stdout:Remove 2 Packages 2026-03-09T18:37:14.685 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:37:14.685 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 39 M 2026-03-09T18:37:14.685 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check 2026-03-09T18:37:14.687 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded. 2026-03-09T18:37:14.687 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test 2026-03-09T18:37:14.702 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded. 
2026-03-09T18:37:14.702 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction 2026-03-09T18:37:14.733 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1 2026-03-09T18:37:14.757 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-09T18:37:14.757 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T18:37:14.757 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service". 2026-03-09T18:37:14.757 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-radosgw.target". 2026-03-09T18:37:14.757 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-radosgw.target". 2026-03-09T18:37:14.757 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:37:14.760 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-09T18:37:14.768 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-09T18:37:14.782 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : mailcap-2.1.49-5.el9.noarch 2/2 2026-03-09T18:37:14.851 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: mailcap-2.1.49-5.el9.noarch 2/2 2026-03-09T18:37:14.851 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-09T18:37:14.899 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 2/2 2026-03-09T18:37:14.900 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:37:14.900 INFO:teuthology.orchestra.run.vm09.stdout:Removed: 2026-03-09T18:37:14.900 INFO:teuthology.orchestra.run.vm09.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 mailcap-2.1.49-5.el9.noarch 2026-03-09T18:37:14.900 
INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:37:14.900 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T18:37:15.096 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-09T18:37:15.096 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T18:37:15.097 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size 2026-03-09T18:37:15.097 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T18:37:15.097 INFO:teuthology.orchestra.run.vm09.stdout:Removing: 2026-03-09T18:37:15.097 INFO:teuthology.orchestra.run.vm09.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 210 M 2026-03-09T18:37:15.097 INFO:teuthology.orchestra.run.vm09.stdout:Removing unused dependencies: 2026-03-09T18:37:15.097 INFO:teuthology.orchestra.run.vm09.stdout: libxslt x86_64 1.1.34-12.el9 @appstream 743 k 2026-03-09T18:37:15.097 INFO:teuthology.orchestra.run.vm09.stdout: socat x86_64 1.7.4.1-8.el9 @appstream 1.1 M 2026-03-09T18:37:15.097 INFO:teuthology.orchestra.run.vm09.stdout: xmlstarlet x86_64 1.6.1-20.el9 @appstream 195 k 2026-03-09T18:37:15.097 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:37:15.097 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary 2026-03-09T18:37:15.097 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T18:37:15.097 INFO:teuthology.orchestra.run.vm09.stdout:Remove 4 Packages 2026-03-09T18:37:15.097 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:37:15.097 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 212 M 2026-03-09T18:37:15.097 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check 2026-03-09T18:37:15.100 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded. 
2026-03-09T18:37:15.100 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test 2026-03-09T18:37:15.123 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded. 2026-03-09T18:37:15.124 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction 2026-03-09T18:37:15.185 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1 2026-03-09T18:37:15.192 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4 2026-03-09T18:37:15.194 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : xmlstarlet-1.6.1-20.el9.x86_64 2/4 2026-03-09T18:37:15.197 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libxslt-1.1.34-12.el9.x86_64 3/4 2026-03-09T18:37:15.212 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : socat-1.7.4.1-8.el9.x86_64 4/4 2026-03-09T18:37:15.286 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: socat-1.7.4.1-8.el9.x86_64 4/4 2026-03-09T18:37:15.286 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4 2026-03-09T18:37:15.286 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 2/4 2026-03-09T18:37:15.286 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 3/4 2026-03-09T18:37:15.334 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 4/4 2026-03-09T18:37:15.334 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:37:15.334 INFO:teuthology.orchestra.run.vm09.stdout:Removed: 2026-03-09T18:37:15.334 INFO:teuthology.orchestra.run.vm09.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 libxslt-1.1.34-12.el9.x86_64 2026-03-09T18:37:15.334 INFO:teuthology.orchestra.run.vm09.stdout: socat-1.7.4.1-8.el9.x86_64 xmlstarlet-1.6.1-20.el9.x86_64 2026-03-09T18:37:15.334 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:37:15.334 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 
2026-03-09T18:37:15.538 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-09T18:37:15.539 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T18:37:15.539 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size 2026-03-09T18:37:15.539 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T18:37:15.539 INFO:teuthology.orchestra.run.vm09.stdout:Removing: 2026-03-09T18:37:15.539 INFO:teuthology.orchestra.run.vm09.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 0 2026-03-09T18:37:15.539 INFO:teuthology.orchestra.run.vm09.stdout:Removing unused dependencies: 2026-03-09T18:37:15.539 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 7.5 M 2026-03-09T18:37:15.539 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 18 M 2026-03-09T18:37:15.539 INFO:teuthology.orchestra.run.vm09.stdout: lua x86_64 5.4.4-4.el9 @appstream 593 k 2026-03-09T18:37:15.539 INFO:teuthology.orchestra.run.vm09.stdout: lua-devel x86_64 5.4.4-4.el9 @crb 49 k 2026-03-09T18:37:15.539 INFO:teuthology.orchestra.run.vm09.stdout: luarocks noarch 3.9.2-5.el9 @epel 692 k 2026-03-09T18:37:15.539 INFO:teuthology.orchestra.run.vm09.stdout: unzip x86_64 6.0-59.el9 @baseos 389 k 2026-03-09T18:37:15.539 INFO:teuthology.orchestra.run.vm09.stdout: zip x86_64 3.0-35.el9 @baseos 724 k 2026-03-09T18:37:15.539 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:37:15.539 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary 2026-03-09T18:37:15.539 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T18:37:15.539 INFO:teuthology.orchestra.run.vm09.stdout:Remove 8 Packages 2026-03-09T18:37:15.539 INFO:teuthology.orchestra.run.vm09.stdout: 
2026-03-09T18:37:15.539 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 28 M 2026-03-09T18:37:15.539 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check 2026-03-09T18:37:15.542 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded. 2026-03-09T18:37:15.542 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test 2026-03-09T18:37:15.565 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded. 2026-03-09T18:37:15.565 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction 2026-03-09T18:37:15.606 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1 2026-03-09T18:37:15.611 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8 2026-03-09T18:37:15.615 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : luarocks-3.9.2-5.el9.noarch 2/8 2026-03-09T18:37:15.617 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : lua-devel-5.4.4-4.el9.x86_64 3/8 2026-03-09T18:37:15.619 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : zip-3.0-35.el9.x86_64 4/8 2026-03-09T18:37:15.621 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : unzip-6.0-59.el9.x86_64 5/8 2026-03-09T18:37:15.623 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : lua-5.4.4-4.el9.x86_64 6/8 2026-03-09T18:37:15.643 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8 2026-03-09T18:37:15.643 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T18:37:15.643 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service". 2026-03-09T18:37:15.643 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mds.target". 2026-03-09T18:37:15.643 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mds.target". 
2026-03-09T18:37:15.643 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:37:15.644 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8 2026-03-09T18:37:15.652 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8 2026-03-09T18:37:15.671 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8 2026-03-09T18:37:15.672 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T18:37:15.672 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service". 2026-03-09T18:37:15.672 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mon.target". 2026-03-09T18:37:15.672 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mon.target". 2026-03-09T18:37:15.672 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:37:15.673 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8 2026-03-09T18:37:15.761 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8 2026-03-09T18:37:15.761 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8 2026-03-09T18:37:15.761 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2/8 2026-03-09T18:37:15.761 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 3/8 2026-03-09T18:37:15.761 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : lua-5.4.4-4.el9.x86_64 4/8 2026-03-09T18:37:15.761 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 5/8 2026-03-09T18:37:15.761 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : 
luarocks-3.9.2-5.el9.noarch 6/8 2026-03-09T18:37:15.761 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : unzip-6.0-59.el9.x86_64 7/8 2026-03-09T18:37:15.809 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : zip-3.0-35.el9.x86_64 8/8 2026-03-09T18:37:15.809 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:37:15.809 INFO:teuthology.orchestra.run.vm09.stdout:Removed: 2026-03-09T18:37:15.809 INFO:teuthology.orchestra.run.vm09.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:37:15.809 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:37:15.809 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:37:15.809 INFO:teuthology.orchestra.run.vm09.stdout: lua-5.4.4-4.el9.x86_64 2026-03-09T18:37:15.809 INFO:teuthology.orchestra.run.vm09.stdout: lua-devel-5.4.4-4.el9.x86_64 2026-03-09T18:37:15.809 INFO:teuthology.orchestra.run.vm09.stdout: luarocks-3.9.2-5.el9.noarch 2026-03-09T18:37:15.809 INFO:teuthology.orchestra.run.vm09.stdout: unzip-6.0-59.el9.x86_64 2026-03-09T18:37:15.809 INFO:teuthology.orchestra.run.vm09.stdout: zip-3.0-35.el9.x86_64 2026-03-09T18:37:15.809 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:37:15.809 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T18:37:16.013 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 
2026-03-09T18:37:16.019 INFO:teuthology.orchestra.run.vm09.stdout:=========================================================================================== 2026-03-09T18:37:16.019 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size 2026-03-09T18:37:16.019 INFO:teuthology.orchestra.run.vm09.stdout:=========================================================================================== 2026-03-09T18:37:16.019 INFO:teuthology.orchestra.run.vm09.stdout:Removing: 2026-03-09T18:37:16.019 INFO:teuthology.orchestra.run.vm09.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 23 M 2026-03-09T18:37:16.019 INFO:teuthology.orchestra.run.vm09.stdout:Removing dependent packages: 2026-03-09T18:37:16.019 INFO:teuthology.orchestra.run.vm09.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 431 k 2026-03-09T18:37:16.019 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.4 M 2026-03-09T18:37:16.019 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 806 k 2026-03-09T18:37:16.020 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 88 M 2026-03-09T18:37:16.020 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 66 M 2026-03-09T18:37:16.020 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 563 k 2026-03-09T18:37:16.020 INFO:teuthology.orchestra.run.vm09.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 59 M 2026-03-09T18:37:16.020 INFO:teuthology.orchestra.run.vm09.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.4 M 2026-03-09T18:37:16.020 INFO:teuthology.orchestra.run.vm09.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M 2026-03-09T18:37:16.020 INFO:teuthology.orchestra.run.vm09.stdout:Removing unused 
dependencies: 2026-03-09T18:37:16.020 INFO:teuthology.orchestra.run.vm09.stdout: abseil-cpp x86_64 20211102.0-4.el9 @epel 1.9 M 2026-03-09T18:37:16.020 INFO:teuthology.orchestra.run.vm09.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 85 M 2026-03-09T18:37:16.020 INFO:teuthology.orchestra.run.vm09.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 628 k 2026-03-09T18:37:16.020 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.5 M 2026-03-09T18:37:16.020 INFO:teuthology.orchestra.run.vm09.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 52 k 2026-03-09T18:37:16.020 INFO:teuthology.orchestra.run.vm09.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 138 k 2026-03-09T18:37:16.020 INFO:teuthology.orchestra.run.vm09.stdout: cryptsetup x86_64 2.8.1-3.el9 @baseos 770 k 2026-03-09T18:37:16.020 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas x86_64 3.0.4-9.el9 @appstream 68 k 2026-03-09T18:37:16.020 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 @appstream 11 M 2026-03-09T18:37:16.020 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 @appstream 39 k 2026-03-09T18:37:16.020 INFO:teuthology.orchestra.run.vm09.stdout: gperftools-libs x86_64 2.9.1-3.el9 @epel 1.4 M 2026-03-09T18:37:16.020 INFO:teuthology.orchestra.run.vm09.stdout: grpc-data noarch 1.46.7-10.el9 @epel 13 k 2026-03-09T18:37:16.020 INFO:teuthology.orchestra.run.vm09.stdout: ledmon-libs x86_64 1.1.0-3.el9 @baseos 80 k 2026-03-09T18:37:16.020 INFO:teuthology.orchestra.run.vm09.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 425 k 2026-03-09T18:37:16.020 INFO:teuthology.orchestra.run.vm09.stdout: libconfig x86_64 1.7.2-9.el9 @baseos 220 k 2026-03-09T18:37:16.020 INFO:teuthology.orchestra.run.vm09.stdout: libgfortran x86_64 11.5.0-14.el9 @baseos 2.8 M 2026-03-09T18:37:16.020 
INFO:teuthology.orchestra.run.vm09.stdout: liboath x86_64 2.6.12-1.el9 @epel 94 k 2026-03-09T18:37:16.020 INFO:teuthology.orchestra.run.vm09.stdout: libquadmath x86_64 11.5.0-14.el9 @baseos 330 k 2026-03-09T18:37:16.020 INFO:teuthology.orchestra.run.vm09.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.6 M 2026-03-09T18:37:16.020 INFO:teuthology.orchestra.run.vm09.stdout: libstoragemgmt x86_64 1.10.1-1.el9 @appstream 685 k 2026-03-09T18:37:16.020 INFO:teuthology.orchestra.run.vm09.stdout: libunwind x86_64 1.6.2-1.el9 @epel 170 k 2026-03-09T18:37:16.020 INFO:teuthology.orchestra.run.vm09.stdout: openblas x86_64 0.3.29-1.el9 @appstream 112 k 2026-03-09T18:37:16.020 INFO:teuthology.orchestra.run.vm09.stdout: openblas-openmp x86_64 0.3.29-1.el9 @appstream 46 M 2026-03-09T18:37:16.020 INFO:teuthology.orchestra.run.vm09.stdout: pciutils x86_64 3.7.0-7.el9 @baseos 216 k 2026-03-09T18:37:16.020 INFO:teuthology.orchestra.run.vm09.stdout: protobuf x86_64 3.14.0-17.el9 @appstream 3.5 M 2026-03-09T18:37:16.020 INFO:teuthology.orchestra.run.vm09.stdout: protobuf-compiler x86_64 3.14.0-17.el9 @crb 2.9 M 2026-03-09T18:37:16.020 INFO:teuthology.orchestra.run.vm09.stdout: python3-asyncssh noarch 2.13.2-5.el9 @epel 3.9 M 2026-03-09T18:37:16.020 INFO:teuthology.orchestra.run.vm09.stdout: python3-autocommand noarch 2.2.2-8.el9 @epel 82 k 2026-03-09T18:37:16.020 INFO:teuthology.orchestra.run.vm09.stdout: python3-babel noarch 2.9.1-2.el9 @appstream 27 M 2026-03-09T18:37:16.020 INFO:teuthology.orchestra.run.vm09.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 @epel 254 k 2026-03-09T18:37:16.020 INFO:teuthology.orchestra.run.vm09.stdout: python3-bcrypt x86_64 3.2.2-1.el9 @epel 87 k 2026-03-09T18:37:16.020 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools noarch 4.2.4-1.el9 @epel 93 k 2026-03-09T18:37:16.020 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 702 k 2026-03-09T18:37:16.020 
INFO:teuthology.orchestra.run.vm09.stdout: python3-certifi noarch 2023.05.07-4.el9 @epel 6.3 k 2026-03-09T18:37:16.020 INFO:teuthology.orchestra.run.vm09.stdout: python3-cffi x86_64 1.14.5-5.el9 @baseos 1.0 M 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-chardet noarch 4.0.0-5.el9 @anaconda 1.4 M 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-cheroot noarch 10.0.1-4.el9 @epel 682 k 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy noarch 18.6.1-2.el9 @epel 1.1 M 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-cryptography x86_64 36.0.1-5.el9 @baseos 4.5 M 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-devel x86_64 3.9.25-3.el9 @appstream 765 k 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-google-auth noarch 1:2.45.0-1.el9 @epel 1.4 M 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio x86_64 1.46.7-10.el9 @epel 6.7 M 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 @epel 418 k 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-idna noarch 2.10-7.el9.1 @anaconda 513 k 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco noarch 8.2.1-3.el9 @epel 3.7 k 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 @epel 24 k 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 @epel 55 k 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-context noarch 6.0.1-3.el9 @epel 31 k 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 @epel 33 k 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-text noarch 4.0.0-2.el9 @epel 51 k 
2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-jinja2 noarch 2.11.3-8.el9 @appstream 1.1 M 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-jsonpatch noarch 1.21-16.el9 @koji-override-0 55 k 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-jsonpointer noarch 2.0-4.el9 @koji-override-0 34 k 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 @epel 21 M 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 @appstream 832 k 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-logutils noarch 0.3.5-21.el9 @epel 126 k 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-mako noarch 1.1.4-6.el9 @appstream 534 k 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-markupsafe x86_64 1.1.1-12.el9 @appstream 60 k 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-more-itertools noarch 8.12.0-2.el9 @epel 378 k 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort noarch 7.1.1-5.el9 @epel 215 k 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy x86_64 1:1.23.5-2.el9 @appstream 30 M 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 @appstream 1.7 M 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-oauthlib noarch 3.1.1-5.el9 @koji-override-0 888 k 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan noarch 1.4.2-3.el9 @epel 1.3 M 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-ply noarch 3.11-14.el9 @baseos 430 k 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-portend noarch 3.1.0-2.el9 @epel 20 k 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: 
python3-prettytable noarch 0.7.2-27.el9 @koji-override-0 166 k 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-protobuf noarch 3.14.0-17.el9 @appstream 1.4 M 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 @epel 389 k 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1 noarch 0.4.8-7.el9 @appstream 622 k 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 @appstream 1.0 M 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-pycparser noarch 2.20-6.el9 @baseos 745 k 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-pysocks noarch 1.7.1-12.el9 @anaconda 88 k 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-pytz noarch 2021.1-5.el9 @koji-override-0 176 k 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-repoze-lru noarch 0.7-16.el9 @epel 83 k 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests noarch 2.25.1-10.el9 @baseos 405 k 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 @appstream 119 k 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes noarch 2.5.1-5.el9 @epel 459 k 2026-03-09T18:37:16.021 INFO:teuthology.orchestra.run.vm09.stdout: python3-rsa noarch 4.9-2.el9 @epel 202 k 2026-03-09T18:37:16.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-scipy x86_64 1.9.3-2.el9 @appstream 76 M 2026-03-09T18:37:16.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora noarch 5.0.0-2.el9 @epel 96 k 2026-03-09T18:37:16.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-typing-extensions noarch 4.15.0-1.el9 @epel 447 k 2026-03-09T18:37:16.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-urllib3 noarch 1.26.5-7.el9 @baseos 746 k 2026-03-09T18:37:16.022 
INFO:teuthology.orchestra.run.vm09.stdout: python3-webob noarch 1.8.8-2.el9 @epel 1.2 M 2026-03-09T18:37:16.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-websocket-client noarch 1.2.3-2.el9 @epel 319 k 2026-03-09T18:37:16.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 @epel 1.9 M 2026-03-09T18:37:16.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc-lockfile noarch 2.0-10.el9 @epel 35 k 2026-03-09T18:37:16.022 INFO:teuthology.orchestra.run.vm09.stdout: qatlib x86_64 25.08.0-2.el9 @appstream 639 k 2026-03-09T18:37:16.022 INFO:teuthology.orchestra.run.vm09.stdout: qatlib-service x86_64 25.08.0-2.el9 @appstream 69 k 2026-03-09T18:37:16.022 INFO:teuthology.orchestra.run.vm09.stdout: qatzip-libs x86_64 1.3.1-1.el9 @appstream 148 k 2026-03-09T18:37:16.022 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:37:16.022 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary 2026-03-09T18:37:16.022 INFO:teuthology.orchestra.run.vm09.stdout:=========================================================================================== 2026-03-09T18:37:16.022 INFO:teuthology.orchestra.run.vm09.stdout:Remove 100 Packages 2026-03-09T18:37:16.022 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:37:16.022 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 612 M 2026-03-09T18:37:16.022 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check 2026-03-09T18:37:16.049 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded. 2026-03-09T18:37:16.049 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test 2026-03-09T18:37:16.154 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded. 
2026-03-09T18:37:16.154 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction 2026-03-09T18:37:16.298 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1 2026-03-09T18:37:16.298 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/100 2026-03-09T18:37:16.305 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/100 2026-03-09T18:37:16.326 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/100 2026-03-09T18:37:16.326 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T18:37:16.326 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service". 2026-03-09T18:37:16.326 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mgr.target". 2026-03-09T18:37:16.326 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mgr.target". 
2026-03-09T18:37:16.326 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:37:16.327 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/100 2026-03-09T18:37:16.340 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/100 2026-03-09T18:37:16.365 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 3/100 2026-03-09T18:37:16.365 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/100 2026-03-09T18:37:16.420 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/100 2026-03-09T18:37:16.430 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-kubernetes-1:26.1.0-3.el9.noarch 5/100 2026-03-09T18:37:16.435 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-requests-oauthlib-1.3.0-12.el9.noarch 6/100 2026-03-09T18:37:16.435 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/100 2026-03-09T18:37:16.446 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/100 2026-03-09T18:37:16.453 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-cherrypy-18.6.1-2.el9.noarch 8/100 2026-03-09T18:37:16.457 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-cheroot-10.0.1-4.el9.noarch 9/100 2026-03-09T18:37:16.465 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-grpcio-tools-1.46.7-10.el9.x86_64 10/100 2026-03-09T18:37:16.470 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-grpcio-1.46.7-10.el9.x86_64 11/100 2026-03-09T18:37:16.491 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/100 2026-03-09T18:37:16.491 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not 
supported for this. 2026-03-09T18:37:16.491 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service". 2026-03-09T18:37:16.491 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-osd.target". 2026-03-09T18:37:16.491 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-osd.target". 2026-03-09T18:37:16.491 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:37:16.496 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/100 2026-03-09T18:37:16.504 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/100 2026-03-09T18:37:16.521 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/100 2026-03-09T18:37:16.521 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T18:37:16.521 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service". 
2026-03-09T18:37:16.521 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:37:16.529 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch      13/100
2026-03-09T18:37:16.538 INFO:teuthology.orchestra.run.vm09.stdout:  Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch      13/100
2026-03-09T18:37:16.540 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-jaraco-collections-3.0.0-8.el9.noarch      14/100
2026-03-09T18:37:16.545 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-jaraco-text-4.0.0-2.el9.noarch             15/100
2026-03-09T18:37:16.549 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-jinja2-2.11.3-8.el9.noarch                 16/100
2026-03-09T18:37:16.557 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-requests-2.25.1-10.el9.noarch              17/100
2026-03-09T18:37:16.570 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-google-auth-1:2.45.0-1.el9.noarch          18/100
2026-03-09T18:37:16.576 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-pecan-1.4.2-3.el9.noarch                   19/100
2026-03-09T18:37:16.586 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-rsa-4.9-2.el9.noarch                       20/100
2026-03-09T18:37:16.592 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-pyasn1-modules-0.4.8-7.el9.noarch          21/100
2026-03-09T18:37:16.621 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-urllib3-1.26.5-7.el9.noarch                22/100
2026-03-09T18:37:16.628 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-babel-2.9.1-2.el9.noarch                   23/100
2026-03-09T18:37:16.631 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-jaraco-classes-3.2.1-5.el9.noarch          24/100
2026-03-09T18:37:16.640 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-pyOpenSSL-21.0.0-1.el9.noarch              25/100
2026-03-09T18:37:16.652 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-asyncssh-2.13.2-5.el9.noarch               26/100
2026-03-09T18:37:16.652 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911   27/100
2026-03-09T18:37:16.660 INFO:teuthology.orchestra.run.vm09.stdout:  Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911   27/100
2026-03-09T18:37:16.755 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-jsonpatch-1.21-16.el9.noarch               28/100
2026-03-09T18:37:16.771 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-scipy-1.9.3-2.el9.x86_64                   29/100
2026-03-09T18:37:16.784 INFO:teuthology.orchestra.run.vm09.stdout:  Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64                 30/100
2026-03-09T18:37:16.785 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/libstoragemgmt.service".
2026-03-09T18:37:16.785 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:37:16.786 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : libstoragemgmt-1.10.1-1.el9.x86_64                 30/100
2026-03-09T18:37:16.810 INFO:teuthology.orchestra.run.vm09.stdout:  Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64                 30/100
2026-03-09T18:37:16.825 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-numpy-f2py-1:1.23.5-2.el9.x86_64           31/100
2026-03-09T18:37:16.831 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-cryptography-36.0.1-5.el9.x86_64           32/100
2026-03-09T18:37:16.833 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : protobuf-compiler-3.14.0-17.el9.x86_64             33/100
2026-03-09T18:37:16.836 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-bcrypt-3.2.2-1.el9.x86_64                  34/100
2026-03-09T18:37:16.857 INFO:teuthology.orchestra.run.vm09.stdout:  Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64       35/100
2026-03-09T18:37:16.857 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T18:37:16.857 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-09T18:37:16.857 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target".
2026-03-09T18:37:16.857 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target".
2026-03-09T18:37:16.857 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:37:16.858 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64       35/100
2026-03-09T18:37:16.869 INFO:teuthology.orchestra.run.vm09.stdout:  Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64       35/100
2026-03-09T18:37:16.873 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-mako-1.1.4-6.el9.noarch                    36/100
2026-03-09T18:37:16.875 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-jaraco-context-6.0.1-3.el9.noarch          37/100
2026-03-09T18:37:16.878 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-portend-3.1.0-2.el9.noarch                 38/100
2026-03-09T18:37:16.881 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-tempora-5.0.0-2.el9.noarch                 39/100
2026-03-09T18:37:16.884 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-jaraco-functools-3.5.0-2.el9.noarch        40/100
2026-03-09T18:37:16.888 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-routes-2.5.1-5.el9.noarch                  41/100
2026-03-09T18:37:16.892 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-cffi-1.14.5-5.el9.x86_64                   42/100
2026-03-09T18:37:16.939 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-pycparser-2.20-6.el9.noarch                43/100
2026-03-09T18:37:16.950 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-numpy-1:1.23.5-2.el9.x86_64                44/100
2026-03-09T18:37:16.953 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : flexiblas-netlib-3.0.4-9.el9.x86_64                45/100
2026-03-09T18:37:16.959 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64       46/100
2026-03-09T18:37:16.962 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : openblas-openmp-0.3.29-1.el9.x86_64                47/100
2026-03-09T18:37:16.965 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : libgfortran-11.5.0-14.el9.x86_64                   48/100
2026-03-09T18:37:16.968 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-libstoragemgmt-1.10.1-1.el9.x86_64         49/100
2026-03-09T18:37:16.994 INFO:teuthology.orchestra.run.vm09.stdout:  Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd   50/100
2026-03-09T18:37:16.994 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T18:37:16.994 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-09T18:37:16.994 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:37:16.994 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : ceph-immutable-object-cache-2:19.2.3-678.ge911bd   50/100
2026-03-09T18:37:17.000 INFO:teuthology.orchestra.run.vm09.stdout:  Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd   50/100
2026-03-09T18:37:17.002 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : openblas-0.3.29-1.el9.x86_64                       51/100
2026-03-09T18:37:17.004 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : flexiblas-3.0.4-9.el9.x86_64                       52/100
2026-03-09T18:37:17.006 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-ply-3.11-14.el9.noarch                     53/100
2026-03-09T18:37:17.009 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-repoze-lru-0.7-16.el9.noarch               54/100
2026-03-09T18:37:17.011 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-jaraco-8.2.1-3.el9.noarch                  55/100
2026-03-09T18:37:17.014 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-more-itertools-8.12.0-2.el9.noarch         56/100
2026-03-09T18:37:17.017 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-pytz-2021.1-5.el9.noarch                   57/100
2026-03-09T18:37:17.025 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-backports-tarfile-1.2.0-1.el9.noarch       58/100
2026-03-09T18:37:17.030 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-devel-3.9.25-3.el9.x86_64                  59/100
2026-03-09T18:37:17.032 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-jsonpointer-2.0-4.el9.noarch               60/100
2026-03-09T18:37:17.035 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-typing-extensions-4.15.0-1.el9.noarch      61/100
2026-03-09T18:37:17.038 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-idna-2.10-7.el9.1.noarch                   62/100
2026-03-09T18:37:17.043 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-pysocks-1.7.1-12.el9.noarch                63/100
2026-03-09T18:37:17.048 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-pyasn1-0.4.8-7.el9.noarch                  64/100
2026-03-09T18:37:17.053 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-logutils-0.3.5-21.el9.noarch               65/100
2026-03-09T18:37:17.058 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-webob-1.8.8-2.el9.noarch                   66/100
2026-03-09T18:37:17.064 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-cachetools-4.2.4-1.el9.noarch              67/100
2026-03-09T18:37:17.068 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-chardet-4.0.0-5.el9.noarch                 68/100
2026-03-09T18:37:17.070 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-autocommand-2.2.2-8.el9.noarch             69/100
2026-03-09T18:37:17.076 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : grpc-data-1.46.7-10.el9.noarch                     70/100
2026-03-09T18:37:17.080 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-protobuf-3.14.0-17.el9.noarch              71/100
2026-03-09T18:37:17.083 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-zc-lockfile-2.0-10.el9.noarch              72/100
2026-03-09T18:37:17.092 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-natsort-7.1.1-5.el9.noarch                 73/100
2026-03-09T18:37:17.097 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-oauthlib-3.1.1-5.el9.noarch                74/100
2026-03-09T18:37:17.100 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-websocket-client-1.2.3-2.el9.noarch        75/100
2026-03-09T18:37:17.104 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-certifi-2023.05.07-4.el9.noarch            76/100
2026-03-09T18:37:17.105 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e   77/100
2026-03-09T18:37:17.111 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el   78/100
2026-03-09T18:37:17.115 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-werkzeug-2.0.3-3.el9.1.noarch              79/100
2026-03-09T18:37:17.134 INFO:teuthology.orchestra.run.vm09.stdout:  Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64        80/100
2026-03-09T18:37:17.134 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-crash.service".
2026-03-09T18:37:17.134 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:37:17.141 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64        80/100
2026-03-09T18:37:17.170 INFO:teuthology.orchestra.run.vm09.stdout:  Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64        80/100
2026-03-09T18:37:17.170 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64      81/100
2026-03-09T18:37:17.182 INFO:teuthology.orchestra.run.vm09.stdout:  Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64      81/100
2026-03-09T18:37:17.187 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : qatzip-libs-1.3.1-1.el9.x86_64                     82/100
2026-03-09T18:37:17.190 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x   83/100
2026-03-09T18:37:17.192 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : python3-prettytable-0.7.2-27.el9.noarch            84/100
2026-03-09T18:37:17.192 INFO:teuthology.orchestra.run.vm09.stdout:  Erasing          : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64     85/100
2026-03-09T18:37:19.700 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-09T18:37:19.700 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-09T18:37:19.700 INFO:teuthology.orchestra.run.vm04.stdout: Package          Arch       Version                       Repository    Size
2026-03-09T18:37:19.700 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-09T18:37:19.700 INFO:teuthology.orchestra.run.vm04.stdout:Removing:
2026-03-09T18:37:19.700 INFO:teuthology.orchestra.run.vm04.stdout: ceph-radosgw     x86_64     2:19.2.3-678.ge911bdeb.el9    @ceph         39 M
2026-03-09T18:37:19.701 INFO:teuthology.orchestra.run.vm04.stdout:Removing unused dependencies:
2026-03-09T18:37:19.701 INFO:teuthology.orchestra.run.vm04.stdout: mailcap          noarch     2.1.49-5.el9                  @baseos       78 k
2026-03-09T18:37:19.701 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T18:37:19.701 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary
2026-03-09T18:37:19.701 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-09T18:37:19.701 INFO:teuthology.orchestra.run.vm04.stdout:Remove  2 Packages
2026-03-09T18:37:19.701 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T18:37:19.701 INFO:teuthology.orchestra.run.vm04.stdout:Freed space: 39 M
2026-03-09T18:37:19.701 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check
2026-03-09T18:37:19.703 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded.
2026-03-09T18:37:19.703 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test
2026-03-09T18:37:19.717 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded.
2026-03-09T18:37:19.717 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction
2026-03-09T18:37:19.748 INFO:teuthology.orchestra.run.vm04.stdout:  Preparing        :                                                       1/1
2026-03-09T18:37:19.768 INFO:teuthology.orchestra.run.vm04.stdout:  Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64        1/2
2026-03-09T18:37:19.768 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T18:37:19.768 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-09T18:37:19.769 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-radosgw.target".
2026-03-09T18:37:19.769 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-radosgw.target".
2026-03-09T18:37:19.769 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T18:37:19.773 INFO:teuthology.orchestra.run.vm04.stdout:  Erasing          : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64        1/2
2026-03-09T18:37:19.781 INFO:teuthology.orchestra.run.vm04.stdout:  Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64        1/2
2026-03-09T18:37:19.796 INFO:teuthology.orchestra.run.vm04.stdout:  Erasing          : mailcap-2.1.49-5.el9.noarch                           2/2
2026-03-09T18:37:19.859 INFO:teuthology.orchestra.run.vm04.stdout:  Running scriptlet: mailcap-2.1.49-5.el9.noarch                           2/2
2026-03-09T18:37:19.860 INFO:teuthology.orchestra.run.vm04.stdout:  Verifying        : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64        1/2
2026-03-09T18:37:19.907 INFO:teuthology.orchestra.run.vm04.stdout:  Verifying        : mailcap-2.1.49-5.el9.noarch                           2/2
2026-03-09T18:37:19.907 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T18:37:19.907 INFO:teuthology.orchestra.run.vm04.stdout:Removed:
2026-03-09T18:37:19.907 INFO:teuthology.orchestra.run.vm04.stdout:  ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64  mailcap-2.1.49-5.el9.noarch
2026-03-09T18:37:19.907 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T18:37:19.907 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-09T18:37:20.109 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-09T18:37:20.110 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-09T18:37:20.110 INFO:teuthology.orchestra.run.vm04.stdout: Package         Arch       Version                       Repository     Size
2026-03-09T18:37:20.110 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-09T18:37:20.110 INFO:teuthology.orchestra.run.vm04.stdout:Removing:
2026-03-09T18:37:20.110 INFO:teuthology.orchestra.run.vm04.stdout: ceph-test       x86_64     2:19.2.3-678.ge911bdeb.el9    @ceph         210 M
2026-03-09T18:37:20.110 INFO:teuthology.orchestra.run.vm04.stdout:Removing unused dependencies:
2026-03-09T18:37:20.110 INFO:teuthology.orchestra.run.vm04.stdout: libxslt         x86_64     1.1.34-12.el9                 @appstream    743 k
2026-03-09T18:37:20.110 INFO:teuthology.orchestra.run.vm04.stdout: socat           x86_64     1.7.4.1-8.el9                 @appstream    1.1 M
2026-03-09T18:37:20.110 INFO:teuthology.orchestra.run.vm04.stdout: xmlstarlet      x86_64     1.6.1-20.el9                  @appstream    195 k
2026-03-09T18:37:20.110 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T18:37:20.110 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary
2026-03-09T18:37:20.110 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-09T18:37:20.110 INFO:teuthology.orchestra.run.vm04.stdout:Remove  4 Packages
2026-03-09T18:37:20.110 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T18:37:20.110 INFO:teuthology.orchestra.run.vm04.stdout:Freed space: 212 M
2026-03-09T18:37:20.110 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check
2026-03-09T18:37:20.113 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded.
2026-03-09T18:37:20.113 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test
2026-03-09T18:37:20.136 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded.
2026-03-09T18:37:20.136 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction
2026-03-09T18:37:20.198 INFO:teuthology.orchestra.run.vm04.stdout:  Preparing        :                                                       1/1
2026-03-09T18:37:20.203 INFO:teuthology.orchestra.run.vm04.stdout:  Erasing          : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64           1/4
2026-03-09T18:37:20.205 INFO:teuthology.orchestra.run.vm04.stdout:  Erasing          : xmlstarlet-1.6.1-20.el9.x86_64                        2/4
2026-03-09T18:37:20.208 INFO:teuthology.orchestra.run.vm04.stdout:  Erasing          : libxslt-1.1.34-12.el9.x86_64                          3/4
2026-03-09T18:37:20.224 INFO:teuthology.orchestra.run.vm04.stdout:  Erasing          : socat-1.7.4.1-8.el9.x86_64                            4/4
2026-03-09T18:37:20.292 INFO:teuthology.orchestra.run.vm04.stdout:  Running scriptlet: socat-1.7.4.1-8.el9.x86_64                            4/4
2026-03-09T18:37:20.292 INFO:teuthology.orchestra.run.vm04.stdout:  Verifying        : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64           1/4
2026-03-09T18:37:20.292 INFO:teuthology.orchestra.run.vm04.stdout:  Verifying        : libxslt-1.1.34-12.el9.x86_64                          2/4
2026-03-09T18:37:20.292 INFO:teuthology.orchestra.run.vm04.stdout:  Verifying        : socat-1.7.4.1-8.el9.x86_64                            3/4
2026-03-09T18:37:20.340 INFO:teuthology.orchestra.run.vm04.stdout:  Verifying        : xmlstarlet-1.6.1-20.el9.x86_64                        4/4
2026-03-09T18:37:20.340 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T18:37:20.340 INFO:teuthology.orchestra.run.vm04.stdout:Removed:
2026-03-09T18:37:20.340 INFO:teuthology.orchestra.run.vm04.stdout:  ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64  libxslt-1.1.34-12.el9.x86_64
2026-03-09T18:37:20.340 INFO:teuthology.orchestra.run.vm04.stdout:  socat-1.7.4.1-8.el9.x86_64                   xmlstarlet-1.6.1-20.el9.x86_64
2026-03-09T18:37:20.340 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T18:37:20.340 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-09T18:37:20.537 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-09T18:37:20.537 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-09T18:37:20.537 INFO:teuthology.orchestra.run.vm04.stdout: Package        Arch       Version                       Repository      Size
2026-03-09T18:37:20.537 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-09T18:37:20.537 INFO:teuthology.orchestra.run.vm04.stdout:Removing:
2026-03-09T18:37:20.537 INFO:teuthology.orchestra.run.vm04.stdout: ceph           x86_64     2:19.2.3-678.ge911bdeb.el9    @ceph           0
2026-03-09T18:37:20.537 INFO:teuthology.orchestra.run.vm04.stdout:Removing unused dependencies:
2026-03-09T18:37:20.537 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mds       x86_64     2:19.2.3-678.ge911bdeb.el9    @ceph         7.5 M
2026-03-09T18:37:20.537 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mon       x86_64     2:19.2.3-678.ge911bdeb.el9    @ceph          18 M
2026-03-09T18:37:20.537 INFO:teuthology.orchestra.run.vm04.stdout: lua            x86_64     5.4.4-4.el9                   @appstream    593 k
2026-03-09T18:37:20.537 INFO:teuthology.orchestra.run.vm04.stdout: lua-devel      x86_64     5.4.4-4.el9                   @crb           49 k
2026-03-09T18:37:20.537 INFO:teuthology.orchestra.run.vm04.stdout: luarocks       noarch     3.9.2-5.el9                   @epel         692 k
2026-03-09T18:37:20.537 INFO:teuthology.orchestra.run.vm04.stdout: unzip          x86_64     6.0-59.el9                    @baseos       389 k
2026-03-09T18:37:20.537 INFO:teuthology.orchestra.run.vm04.stdout: zip            x86_64     3.0-35.el9                    @baseos       724 k
2026-03-09T18:37:20.538 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T18:37:20.538 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary
2026-03-09T18:37:20.538 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-09T18:37:20.538 INFO:teuthology.orchestra.run.vm04.stdout:Remove  8 Packages
2026-03-09T18:37:20.538 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T18:37:20.538 INFO:teuthology.orchestra.run.vm04.stdout:Freed space: 28 M
2026-03-09T18:37:20.538 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check
2026-03-09T18:37:20.540 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded.
2026-03-09T18:37:20.540 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test
2026-03-09T18:37:20.563 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded.
2026-03-09T18:37:20.564 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction
2026-03-09T18:37:20.604 INFO:teuthology.orchestra.run.vm04.stdout:  Preparing        :                                                       1/1
2026-03-09T18:37:20.609 INFO:teuthology.orchestra.run.vm04.stdout:  Erasing          : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64                1/8
2026-03-09T18:37:20.612 INFO:teuthology.orchestra.run.vm04.stdout:  Erasing          : luarocks-3.9.2-5.el9.noarch                           2/8
2026-03-09T18:37:20.614 INFO:teuthology.orchestra.run.vm04.stdout:  Erasing          : lua-devel-5.4.4-4.el9.x86_64                          3/8
2026-03-09T18:37:20.617 INFO:teuthology.orchestra.run.vm04.stdout:  Erasing          : zip-3.0-35.el9.x86_64                                 4/8
2026-03-09T18:37:20.620 INFO:teuthology.orchestra.run.vm04.stdout:  Erasing          : unzip-6.0-59.el9.x86_64                               5/8
2026-03-09T18:37:20.622 INFO:teuthology.orchestra.run.vm04.stdout:  Erasing          : lua-5.4.4-4.el9.x86_64                                6/8
2026-03-09T18:37:20.643 INFO:teuthology.orchestra.run.vm04.stdout:  Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64            7/8
2026-03-09T18:37:20.643 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T18:37:20.643 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-09T18:37:20.643 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mds.target".
2026-03-09T18:37:20.643 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mds.target".
2026-03-09T18:37:20.643 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T18:37:20.644 INFO:teuthology.orchestra.run.vm04.stdout:  Erasing          : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64            7/8
2026-03-09T18:37:20.651 INFO:teuthology.orchestra.run.vm04.stdout:  Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64            7/8
2026-03-09T18:37:20.674 INFO:teuthology.orchestra.run.vm04.stdout:  Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64            8/8
2026-03-09T18:37:20.674 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T18:37:20.674 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-09T18:37:20.674 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mon.target".
2026-03-09T18:37:20.674 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mon.target".
2026-03-09T18:37:20.674 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T18:37:20.676 INFO:teuthology.orchestra.run.vm04.stdout:  Erasing          : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64            8/8
2026-03-09T18:37:20.762 INFO:teuthology.orchestra.run.vm04.stdout:  Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64            8/8
2026-03-09T18:37:20.762 INFO:teuthology.orchestra.run.vm04.stdout:  Verifying        : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64                1/8
2026-03-09T18:37:20.762 INFO:teuthology.orchestra.run.vm04.stdout:  Verifying        : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64            2/8
2026-03-09T18:37:20.762 INFO:teuthology.orchestra.run.vm04.stdout:  Verifying        : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64            3/8
2026-03-09T18:37:20.762 INFO:teuthology.orchestra.run.vm04.stdout:  Verifying        : lua-5.4.4-4.el9.x86_64                                4/8
2026-03-09T18:37:20.762 INFO:teuthology.orchestra.run.vm04.stdout:  Verifying        : lua-devel-5.4.4-4.el9.x86_64                          5/8
2026-03-09T18:37:20.762 INFO:teuthology.orchestra.run.vm04.stdout:  Verifying        : luarocks-3.9.2-5.el9.noarch                           6/8
2026-03-09T18:37:20.762 INFO:teuthology.orchestra.run.vm04.stdout:  Verifying        : unzip-6.0-59.el9.x86_64                               7/8
2026-03-09T18:37:20.815 INFO:teuthology.orchestra.run.vm04.stdout:  Verifying        : zip-3.0-35.el9.x86_64                                 8/8
2026-03-09T18:37:20.815 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T18:37:20.815 INFO:teuthology.orchestra.run.vm04.stdout:Removed:
2026-03-09T18:37:20.816 INFO:teuthology.orchestra.run.vm04.stdout:  ceph-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:37:20.816 INFO:teuthology.orchestra.run.vm04.stdout:  ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:37:20.816 INFO:teuthology.orchestra.run.vm04.stdout:  ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:37:20.816 INFO:teuthology.orchestra.run.vm04.stdout:  lua-5.4.4-4.el9.x86_64
2026-03-09T18:37:20.816 INFO:teuthology.orchestra.run.vm04.stdout:  lua-devel-5.4.4-4.el9.x86_64
2026-03-09T18:37:20.816 INFO:teuthology.orchestra.run.vm04.stdout:  luarocks-3.9.2-5.el9.noarch
2026-03-09T18:37:20.816 INFO:teuthology.orchestra.run.vm04.stdout:  unzip-6.0-59.el9.x86_64
2026-03-09T18:37:20.816 INFO:teuthology.orchestra.run.vm04.stdout:  zip-3.0-35.el9.x86_64
2026-03-09T18:37:20.816 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T18:37:20.816 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-09T18:37:21.027 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-09T18:37:21.032 INFO:teuthology.orchestra.run.vm04.stdout:===========================================================================================
2026-03-09T18:37:21.032 INFO:teuthology.orchestra.run.vm04.stdout: Package                        Arch     Version                     Repository      Size
2026-03-09T18:37:21.032 INFO:teuthology.orchestra.run.vm04.stdout:===========================================================================================
2026-03-09T18:37:21.032 INFO:teuthology.orchestra.run.vm04.stdout:Removing:
2026-03-09T18:37:21.032 INFO:teuthology.orchestra.run.vm04.stdout: ceph-base                      x86_64   2:19.2.3-678.ge911bdeb.el9  @ceph           23 M
2026-03-09T18:37:21.032 INFO:teuthology.orchestra.run.vm04.stdout:Removing dependent packages:
2026-03-09T18:37:21.032 INFO:teuthology.orchestra.run.vm04.stdout: ceph-immutable-object-cache    x86_64   2:19.2.3-678.ge911bdeb.el9  @ceph          431 k
2026-03-09T18:37:21.032 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr                       x86_64   2:19.2.3-678.ge911bdeb.el9  @ceph          3.4 M
2026-03-09T18:37:21.032 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-cephadm               noarch   2:19.2.3-678.ge911bdeb.el9  @ceph-noarch   806 k
2026-03-09T18:37:21.032 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-dashboard             noarch   2:19.2.3-678.ge911bdeb.el9  @ceph-noarch    88 M
2026-03-09T18:37:21.032 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-diskprediction-local  noarch   2:19.2.3-678.ge911bdeb.el9  @ceph-noarch    66 M
2026-03-09T18:37:21.032 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-rook                  noarch   2:19.2.3-678.ge911bdeb.el9  @ceph-noarch   563 k
2026-03-09T18:37:21.032 INFO:teuthology.orchestra.run.vm04.stdout: ceph-osd                       x86_64   2:19.2.3-678.ge911bdeb.el9  @ceph           59 M
2026-03-09T18:37:21.032 INFO:teuthology.orchestra.run.vm04.stdout: ceph-volume                    noarch   2:19.2.3-678.ge911bdeb.el9  @ceph-noarch   1.4 M
2026-03-09T18:37:21.032 INFO:teuthology.orchestra.run.vm04.stdout: rbd-mirror                     x86_64   2:19.2.3-678.ge911bdeb.el9  @ceph           13 M
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout:Removing unused dependencies:
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: abseil-cpp                     x86_64   20211102.0-4.el9            @epel          1.9 M
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: ceph-common                    x86_64   2:19.2.3-678.ge911bdeb.el9  @ceph           85 M
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: ceph-grafana-dashboards        noarch   2:19.2.3-678.ge911bdeb.el9  @ceph-noarch   628 k
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core          noarch   2:19.2.3-678.ge911bdeb.el9  @ceph-noarch   1.5 M
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: ceph-prometheus-alerts         noarch   2:19.2.3-678.ge911bdeb.el9  @ceph-noarch    52 k
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: ceph-selinux                   x86_64   2:19.2.3-678.ge911bdeb.el9  @ceph          138 k
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: cryptsetup                     x86_64   2.8.1-3.el9                 @baseos        770 k
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas                      x86_64   3.0.4-9.el9                 @appstream      68 k
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas-netlib               x86_64   3.0.4-9.el9                 @appstream      11 M
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas-openblas-openmp      x86_64   3.0.4-9.el9                 @appstream      39 k
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: gperftools-libs                x86_64   2.9.1-3.el9                 @epel          1.4 M
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: grpc-data                      noarch   1.46.7-10.el9               @epel           13 k
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: ledmon-libs                    x86_64   1.1.0-3.el9                 @baseos         80 k
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: libcephsqlite                  x86_64   2:19.2.3-678.ge911bdeb.el9  @ceph          425 k
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: libconfig                      x86_64   1.7.2-9.el9                 @baseos        220 k
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: libgfortran                    x86_64   11.5.0-14.el9               @baseos        2.8 M
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: liboath                        x86_64   2.6.12-1.el9                @epel           94 k
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: libquadmath                    x86_64   11.5.0-14.el9               @baseos        330 k
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: libradosstriper1               x86_64   2:19.2.3-678.ge911bdeb.el9  @ceph          1.6 M
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: libstoragemgmt                 x86_64   1.10.1-1.el9                @appstream     685 k
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: libunwind                      x86_64   1.6.2-1.el9                 @epel          170 k
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: openblas                       x86_64   0.3.29-1.el9                @appstream     112 k
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: openblas-openmp                x86_64   0.3.29-1.el9                @appstream      46 M
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: pciutils                       x86_64   3.7.0-7.el9                 @baseos        216 k
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: protobuf                       x86_64   3.14.0-17.el9               @appstream     3.5 M
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: protobuf-compiler              x86_64   3.14.0-17.el9               @crb           2.9 M
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: python3-asyncssh               noarch   2.13.2-5.el9                @epel          3.9 M
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: python3-autocommand            noarch   2.2.2-8.el9                 @epel           82 k
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: python3-babel                  noarch   2.9.1-2.el9                 @appstream      27 M
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: python3-backports-tarfile      noarch   1.2.0-1.el9                 @epel          254 k
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: python3-bcrypt                 x86_64   3.2.2-1.el9                 @epel           87 k
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools             noarch   4.2.4-1.el9                 @epel           93 k
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-common            x86_64   2:19.2.3-678.ge911bdeb.el9  @ceph          702 k
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: python3-certifi                noarch   2023.05.07-4.el9            @epel          6.3 k
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: python3-cffi                   x86_64   1.14.5-5.el9                @baseos        1.0 M
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: python3-chardet                noarch   4.0.0-5.el9                 @anaconda      1.4 M
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: python3-cheroot                noarch   10.0.1-4.el9                @epel          682 k
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy               noarch   18.6.1-2.el9                @epel          1.1 M
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: python3-cryptography           x86_64   36.0.1-5.el9                @baseos        4.5 M
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: python3-devel                  x86_64   3.9.25-3.el9                @appstream     765 k
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: python3-google-auth            noarch   1:2.45.0-1.el9              @epel          1.4 M
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: python3-grpcio                 x86_64   1.46.7-10.el9               @epel          6.7 M
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: python3-grpcio-tools           x86_64   1.46.7-10.el9               @epel          418 k
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: python3-idna                   noarch   2.10-7.el9.1                @anaconda      513 k
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco                 noarch   8.2.1-3.el9                 @epel          3.7 k
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-classes         noarch   3.2.1-5.el9                 @epel           24 k
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-collections     noarch   3.0.0-8.el9                 @epel           55 k
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-context         noarch   6.0.1-3.el9                 @epel           31 k
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-functools       noarch   3.5.0-2.el9                 @epel           33 k
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-text            noarch   4.0.0-2.el9                 @epel           51 k
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: python3-jinja2 noarch 2.11.3-8.el9 @appstream 1.1 M
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: python3-jsonpatch noarch 1.21-16.el9 @koji-override-0 55 k
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: python3-jsonpointer noarch 2.0-4.el9 @koji-override-0 34 k
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 @epel 21 M
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 @appstream 832 k
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: python3-logutils noarch 0.3.5-21.el9 @epel 126 k
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: python3-mako noarch 1.1.4-6.el9 @appstream 534 k
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: python3-markupsafe x86_64 1.1.1-12.el9 @appstream 60 k
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: python3-more-itertools noarch 8.12.0-2.el9 @epel 378 k
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: python3-natsort noarch 7.1.1-5.el9 @epel 215 k
2026-03-09T18:37:21.033 INFO:teuthology.orchestra.run.vm04.stdout: python3-numpy x86_64 1:1.23.5-2.el9 @appstream 30 M
2026-03-09T18:37:21.034 INFO:teuthology.orchestra.run.vm04.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 @appstream 1.7 M
2026-03-09T18:37:21.034 INFO:teuthology.orchestra.run.vm04.stdout: python3-oauthlib noarch 3.1.1-5.el9 @koji-override-0 888 k
2026-03-09T18:37:21.034 INFO:teuthology.orchestra.run.vm04.stdout: python3-pecan noarch 1.4.2-3.el9 @epel 1.3 M
2026-03-09T18:37:21.034 INFO:teuthology.orchestra.run.vm04.stdout: python3-ply noarch 3.11-14.el9 @baseos 430 k
2026-03-09T18:37:21.034 INFO:teuthology.orchestra.run.vm04.stdout: python3-portend noarch 3.1.0-2.el9 @epel 20 k
2026-03-09T18:37:21.034 INFO:teuthology.orchestra.run.vm04.stdout: python3-prettytable noarch 0.7.2-27.el9 @koji-override-0 166 k
2026-03-09T18:37:21.034 INFO:teuthology.orchestra.run.vm04.stdout: python3-protobuf noarch 3.14.0-17.el9 @appstream 1.4 M
2026-03-09T18:37:21.034 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 @epel 389 k
2026-03-09T18:37:21.034 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyasn1 noarch 0.4.8-7.el9 @appstream 622 k
2026-03-09T18:37:21.034 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 @appstream 1.0 M
2026-03-09T18:37:21.034 INFO:teuthology.orchestra.run.vm04.stdout: python3-pycparser noarch 2.20-6.el9 @baseos 745 k
2026-03-09T18:37:21.034 INFO:teuthology.orchestra.run.vm04.stdout: python3-pysocks noarch 1.7.1-12.el9 @anaconda 88 k
2026-03-09T18:37:21.034 INFO:teuthology.orchestra.run.vm04.stdout: python3-pytz noarch 2021.1-5.el9 @koji-override-0 176 k
2026-03-09T18:37:21.034 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze-lru noarch 0.7-16.el9 @epel 83 k
2026-03-09T18:37:21.034 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests noarch 2.25.1-10.el9 @baseos 405 k
2026-03-09T18:37:21.034 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 @appstream 119 k
2026-03-09T18:37:21.034 INFO:teuthology.orchestra.run.vm04.stdout: python3-routes noarch 2.5.1-5.el9 @epel 459 k
2026-03-09T18:37:21.034 INFO:teuthology.orchestra.run.vm04.stdout: python3-rsa noarch 4.9-2.el9 @epel 202 k
2026-03-09T18:37:21.034 INFO:teuthology.orchestra.run.vm04.stdout: python3-scipy x86_64 1.9.3-2.el9 @appstream 76 M
2026-03-09T18:37:21.034 INFO:teuthology.orchestra.run.vm04.stdout: python3-tempora noarch 5.0.0-2.el9 @epel 96 k
2026-03-09T18:37:21.034 INFO:teuthology.orchestra.run.vm04.stdout: python3-typing-extensions noarch 4.15.0-1.el9 @epel 447 k
2026-03-09T18:37:21.034 INFO:teuthology.orchestra.run.vm04.stdout: python3-urllib3 noarch 1.26.5-7.el9 @baseos 746 k
2026-03-09T18:37:21.034 INFO:teuthology.orchestra.run.vm04.stdout: python3-webob noarch 1.8.8-2.el9 @epel 1.2 M
2026-03-09T18:37:21.034 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket-client noarch 1.2.3-2.el9 @epel 319 k
2026-03-09T18:37:21.034 INFO:teuthology.orchestra.run.vm04.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 @epel 1.9 M
2026-03-09T18:37:21.034 INFO:teuthology.orchestra.run.vm04.stdout: python3-zc-lockfile noarch 2.0-10.el9 @epel 35 k
2026-03-09T18:37:21.034 INFO:teuthology.orchestra.run.vm04.stdout: qatlib x86_64 25.08.0-2.el9 @appstream 639 k
2026-03-09T18:37:21.034 INFO:teuthology.orchestra.run.vm04.stdout: qatlib-service x86_64 25.08.0-2.el9 @appstream 69 k
2026-03-09T18:37:21.034 INFO:teuthology.orchestra.run.vm04.stdout: qatzip-libs x86_64 1.3.1-1.el9 @appstream 148 k
2026-03-09T18:37:21.034 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T18:37:21.034 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary
2026-03-09T18:37:21.034 INFO:teuthology.orchestra.run.vm04.stdout:===========================================================================================
2026-03-09T18:37:21.034 INFO:teuthology.orchestra.run.vm04.stdout:Remove 100 Packages
2026-03-09T18:37:21.034 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T18:37:21.034 INFO:teuthology.orchestra.run.vm04.stdout:Freed space: 612 M
2026-03-09T18:37:21.034 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check
2026-03-09T18:37:21.060 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded.
2026-03-09T18:37:21.060 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test
2026-03-09T18:37:21.168 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded.
2026-03-09T18:37:21.168 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction
2026-03-09T18:37:21.313 INFO:teuthology.orchestra.run.vm04.stdout: Preparing : 1/1
2026-03-09T18:37:21.313 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/100
2026-03-09T18:37:21.320 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/100
2026-03-09T18:37:21.339 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/100
2026-03-09T18:37:21.339 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T18:37:21.339 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-03-09T18:37:21.339 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mgr.target".
2026-03-09T18:37:21.339 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mgr.target".
2026-03-09T18:37:21.339 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T18:37:21.340 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/100
2026-03-09T18:37:21.353 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/100
2026-03-09T18:37:21.377 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 3/100
2026-03-09T18:37:21.377 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/100
2026-03-09T18:37:21.432 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/100
2026-03-09T18:37:21.441 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-kubernetes-1:26.1.0-3.el9.noarch 5/100
2026-03-09T18:37:21.445 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-requests-oauthlib-1.3.0-12.el9.noarch 6/100
2026-03-09T18:37:21.445 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/100
2026-03-09T18:37:21.456 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/100
2026-03-09T18:37:21.463 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-cherrypy-18.6.1-2.el9.noarch 8/100
2026-03-09T18:37:21.467 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-cheroot-10.0.1-4.el9.noarch 9/100
2026-03-09T18:37:21.475 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-grpcio-tools-1.46.7-10.el9.x86_64 10/100
2026-03-09T18:37:21.480 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-grpcio-1.46.7-10.el9.x86_64 11/100
2026-03-09T18:37:21.501 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/100
2026-03-09T18:37:21.501 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T18:37:21.501 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-09T18:37:21.501 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-osd.target".
2026-03-09T18:37:21.501 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-osd.target".
2026-03-09T18:37:21.501 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T18:37:21.506 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/100
2026-03-09T18:37:21.515 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/100
2026-03-09T18:37:21.531 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/100
2026-03-09T18:37:21.531 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T18:37:21.531 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service".
2026-03-09T18:37:21.531 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T18:37:21.540 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/100
2026-03-09T18:37:21.551 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/100
2026-03-09T18:37:21.553 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-jaraco-collections-3.0.0-8.el9.noarch 14/100
2026-03-09T18:37:21.558 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-jaraco-text-4.0.0-2.el9.noarch 15/100
2026-03-09T18:37:21.562 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-jinja2-2.11.3-8.el9.noarch 16/100
2026-03-09T18:37:21.571 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-requests-2.25.1-10.el9.noarch 17/100
2026-03-09T18:37:21.584 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-google-auth-1:2.45.0-1.el9.noarch 18/100
2026-03-09T18:37:21.590 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-pecan-1.4.2-3.el9.noarch 19/100
2026-03-09T18:37:21.599 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-rsa-4.9-2.el9.noarch 20/100
2026-03-09T18:37:21.606 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-pyasn1-modules-0.4.8-7.el9.noarch 21/100
2026-03-09T18:37:21.635 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-urllib3-1.26.5-7.el9.noarch 22/100
2026-03-09T18:37:21.643 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-babel-2.9.1-2.el9.noarch 23/100
2026-03-09T18:37:21.646 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-jaraco-classes-3.2.1-5.el9.noarch 24/100
2026-03-09T18:37:21.655 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-pyOpenSSL-21.0.0-1.el9.noarch 25/100
2026-03-09T18:37:21.668 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-asyncssh-2.13.2-5.el9.noarch 26/100
2026-03-09T18:37:21.668 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/100
2026-03-09T18:37:21.675 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/100
2026-03-09T18:37:21.769 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-jsonpatch-1.21-16.el9.noarch 28/100
2026-03-09T18:37:21.785 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-scipy-1.9.3-2.el9.x86_64 29/100
2026-03-09T18:37:21.798 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/100
2026-03-09T18:37:21.799 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/multi-user.target.wants/libstoragemgmt.service".
2026-03-09T18:37:21.799 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T18:37:21.800 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libstoragemgmt-1.10.1-1.el9.x86_64 30/100
2026-03-09T18:37:21.825 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/100
2026-03-09T18:37:21.840 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 31/100
2026-03-09T18:37:21.845 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-cryptography-36.0.1-5.el9.x86_64 32/100
2026-03-09T18:37:21.847 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : protobuf-compiler-3.14.0-17.el9.x86_64 33/100
2026-03-09T18:37:21.849 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-bcrypt-3.2.2-1.el9.x86_64 34/100
2026-03-09T18:37:21.869 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/100
2026-03-09T18:37:21.869 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T18:37:21.869 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-09T18:37:21.869 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target".
2026-03-09T18:37:21.869 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target".
2026-03-09T18:37:21.869 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T18:37:21.870 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/100
2026-03-09T18:37:21.882 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/100
2026-03-09T18:37:21.886 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-mako-1.1.4-6.el9.noarch 36/100
2026-03-09T18:37:21.888 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-jaraco-context-6.0.1-3.el9.noarch 37/100
2026-03-09T18:37:21.891 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-portend-3.1.0-2.el9.noarch 38/100
2026-03-09T18:37:21.893 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-tempora-5.0.0-2.el9.noarch 39/100
2026-03-09T18:37:21.896 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-jaraco-functools-3.5.0-2.el9.noarch 40/100
2026-03-09T18:37:21.900 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-routes-2.5.1-5.el9.noarch 41/100
2026-03-09T18:37:21.904 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-cffi-1.14.5-5.el9.x86_64 42/100
2026-03-09T18:37:21.950 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-pycparser-2.20-6.el9.noarch 43/100
2026-03-09T18:37:21.962 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-numpy-1:1.23.5-2.el9.x86_64 44/100
2026-03-09T18:37:21.964 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : flexiblas-netlib-3.0.4-9.el9.x86_64 45/100
2026-03-09T18:37:21.970 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 46/100
2026-03-09T18:37:21.972 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : openblas-openmp-0.3.29-1.el9.x86_64 47/100
2026-03-09T18:37:21.975 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libgfortran-11.5.0-14.el9.x86_64 48/100
2026-03-09T18:37:21.978 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 49/100
2026-03-09T18:37:21.997 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/100
2026-03-09T18:37:21.997 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T18:37:21.997 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-09T18:37:21.997 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T18:37:21.998 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/100
2026-03-09T18:37:22.005 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/100
2026-03-09T18:37:22.007 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : openblas-0.3.29-1.el9.x86_64 51/100
2026-03-09T18:37:22.009 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : flexiblas-3.0.4-9.el9.x86_64 52/100
2026-03-09T18:37:22.012 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-ply-3.11-14.el9.noarch 53/100
2026-03-09T18:37:22.014 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-repoze-lru-0.7-16.el9.noarch 54/100
2026-03-09T18:37:22.016 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-jaraco-8.2.1-3.el9.noarch 55/100
2026-03-09T18:37:22.018 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-more-itertools-8.12.0-2.el9.noarch 56/100
2026-03-09T18:37:22.021 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-pytz-2021.1-5.el9.noarch 57/100
2026-03-09T18:37:22.029 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-backports-tarfile-1.2.0-1.el9.noarch 58/100
2026-03-09T18:37:22.033 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-devel-3.9.25-3.el9.x86_64 59/100
2026-03-09T18:37:22.035 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-jsonpointer-2.0-4.el9.noarch 60/100
2026-03-09T18:37:22.037 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-typing-extensions-4.15.0-1.el9.noarch 61/100
2026-03-09T18:37:22.040 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-idna-2.10-7.el9.1.noarch 62/100
2026-03-09T18:37:22.045 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-pysocks-1.7.1-12.el9.noarch 63/100
2026-03-09T18:37:22.048 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-pyasn1-0.4.8-7.el9.noarch 64/100
2026-03-09T18:37:22.052 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-logutils-0.3.5-21.el9.noarch 65/100
2026-03-09T18:37:22.056 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-webob-1.8.8-2.el9.noarch 66/100
2026-03-09T18:37:22.062 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-cachetools-4.2.4-1.el9.noarch 67/100
2026-03-09T18:37:22.065 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-chardet-4.0.0-5.el9.noarch 68/100
2026-03-09T18:37:22.067 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-autocommand-2.2.2-8.el9.noarch 69/100
2026-03-09T18:37:22.072 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : grpc-data-1.46.7-10.el9.noarch 70/100
2026-03-09T18:37:22.075 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-protobuf-3.14.0-17.el9.noarch 71/100
2026-03-09T18:37:22.078 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-zc-lockfile-2.0-10.el9.noarch 72/100
2026-03-09T18:37:22.086 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-natsort-7.1.1-5.el9.noarch 73/100
2026-03-09T18:37:22.091 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-oauthlib-3.1.1-5.el9.noarch 74/100
2026-03-09T18:37:22.094 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-websocket-client-1.2.3-2.el9.noarch 75/100
2026-03-09T18:37:22.096 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-certifi-2023.05.07-4.el9.noarch 76/100
2026-03-09T18:37:22.097 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 77/100
2026-03-09T18:37:22.102 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 78/100
2026-03-09T18:37:22.106 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-werkzeug-2.0.3-3.el9.1.noarch 79/100
2026-03-09T18:37:22.124 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 80/100
2026-03-09T18:37:22.124 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-crash.service".
2026-03-09T18:37:22.124 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T18:37:22.132 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 80/100
2026-03-09T18:37:22.158 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 80/100
2026-03-09T18:37:22.158 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 81/100
2026-03-09T18:37:22.169 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 81/100
2026-03-09T18:37:22.175 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : qatzip-libs-1.3.1-1.el9.x86_64 82/100
2026-03-09T18:37:22.177 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 83/100
2026-03-09T18:37:22.179 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-prettytable-0.7.2-27.el9.noarch 84/100
2026-03-09T18:37:22.179 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 85/100
2026-03-09T18:37:22.791 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 85/100
2026-03-09T18:37:22.791 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /sys
2026-03-09T18:37:22.791 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /proc
2026-03-09T18:37:22.791 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /mnt
2026-03-09T18:37:22.791 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /var/tmp
2026-03-09T18:37:22.791 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /home
2026-03-09T18:37:22.791 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /root
2026-03-09T18:37:22.791 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /tmp
2026-03-09T18:37:22.791 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:37:22.801 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : qatlib-25.08.0-2.el9.x86_64 86/100
2026-03-09T18:37:22.817 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 87/100
2026-03-09T18:37:22.818 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : qatlib-service-25.08.0-2.el9.x86_64 87/100
2026-03-09T18:37:22.844 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 87/100
2026-03-09T18:37:22.857 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : gperftools-libs-2.9.1-3.el9.x86_64 88/100
2026-03-09T18:37:22.860 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libunwind-1.6.2-1.el9.x86_64 89/100
2026-03-09T18:37:22.863 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : pciutils-3.7.0-7.el9.x86_64 90/100
2026-03-09T18:37:22.865 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : liboath-2.6.12-1.el9.x86_64 91/100
2026-03-09T18:37:22.865 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 92/100
2026-03-09T18:37:22.878 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 92/100
2026-03-09T18:37:22.880 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ledmon-libs-1.1.0-3.el9.x86_64 93/100
2026-03-09T18:37:22.882 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libquadmath-11.5.0-14.el9.x86_64 94/100
2026-03-09T18:37:22.886 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-markupsafe-1.1.1-12.el9.x86_64 95/100
2026-03-09T18:37:22.889 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : protobuf-3.14.0-17.el9.x86_64 96/100
2026-03-09T18:37:22.894 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libconfig-1.7.2-9.el9.x86_64 97/100
2026-03-09T18:37:22.901 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : cryptsetup-2.8.1-3.el9.x86_64 98/100
2026-03-09T18:37:22.905 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : abseil-cpp-20211102.0-4.el9.x86_64 99/100
2026-03-09T18:37:22.905 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 100/100
2026-03-09T18:37:22.999 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 100/100
2026-03-09T18:37:22.999 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 1/100
2026-03-09T18:37:22.999 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/100
2026-03-09T18:37:22.999 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/100
2026-03-09T18:37:22.999 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 4/100
2026-03-09T18:37:22.999 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/100
2026-03-09T18:37:22.999 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 6/100
2026-03-09T18:37:22.999 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/100
2026-03-09T18:37:22.999 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 8/100
2026-03-09T18:37:22.999 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 9/100
2026-03-09T18:37:22.999 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 10/100
2026-03-09T18:37:22.999 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 11/100
2026-03-09T18:37:22.999 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/100
2026-03-09T18:37:22.999 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 13/100
2026-03-09T18:37:22.999 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 14/100
2026-03-09T18:37:23.000 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 15/100
2026-03-09T18:37:23.000 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 16/100
2026-03-09T18:37:23.000 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 17/100
2026-03-09T18:37:23.000 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 18/100
2026-03-09T18:37:23.000 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 19/100
2026-03-09T18:37:23.000 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 20/100
2026-03-09T18:37:23.000 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 21/100
2026-03-09T18:37:23.000 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 22/100
2026-03-09T18:37:23.000 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 23/100
2026-03-09T18:37:23.000 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 24/100
2026-03-09T18:37:23.000 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 25/100
2026-03-09T18:37:23.000 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 26/100
2026-03-09T18:37:23.000 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 27/100
2026-03-09T18:37:23.000 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 28/100
2026-03-09T18:37:23.000 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 29/100
2026-03-09T18:37:23.000 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 30/100
2026-03-09T18:37:23.001 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 31/100
2026-03-09T18:37:23.001 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 32/100
2026-03-09T18:37:23.001 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 33/100
2026-03-09T18:37:23.001 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 34/100
2026-03-09T18:37:23.001 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 35/100
2026-03-09T18:37:23.001 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 36/100
2026-03-09T18:37:23.001 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 37/100
2026-03-09T18:37:23.001 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 38/100
2026-03-09T18:37:23.001 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 39/100
2026-03-09T18:37:23.001 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 40/100
2026-03-09T18:37:23.001 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 41/100
2026-03-09T18:37:23.001 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 42/100
2026-03-09T18:37:23.001 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 43/100
2026-03-09T18:37:23.001 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/100
2026-03-09T18:37:23.001 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-chardet-4.0.0-5.el9.noarch 45/100
2026-03-09T18:37:23.001 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 46/100
2026-03-09T18:37:23.001 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 47/100
2026-03-09T18:37:23.001 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 48/100
2026-03-09T18:37:23.001 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 49/100
2026-03-09T18:37:23.001 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 50/100
2026-03-09T18:37:23.001 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 51/100
2026-03-09T18:37:23.001 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 52/100
2026-03-09T18:37:23.001 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-idna-2.10-7.el9.1.noarch 53/100
2026-03-09T18:37:23.001 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 54/100
2026-03-09T18:37:23.002 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 55/100
2026-03-09T18:37:23.002 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 56/100
2026-03-09T18:37:23.002 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 57/100
2026-03-09T18:37:23.002 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 58/100
2026-03-09T18:37:23.002 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 59/100
2026-03-09T18:37:23.002 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 60/100
2026-03-09T18:37:23.002 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jsonpatch-1.21-16.el9.noarch 61/100
2026-03-09T18:37:23.002 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jsonpointer-2.0-4.el9.noarch 62/100
2026-03-09T18:37:23.002 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 63/100
2026-03-09T18:37:23.002 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 64/100
2026-03-09T18:37:23.002 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 65/100
2026-03-09T18:37:23.002 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 66/100
2026-03-09T18:37:23.002 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 67/100
2026-03-09T18:37:23.002 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 68/100
2026-03-09T18:37:23.002 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 69/100
2026-03-09T18:37:23.002 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 70/100 2026-03-09T18:37:23.002 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 71/100 2026-03-09T18:37:23.002 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-oauthlib-3.1.1-5.el9.noarch 72/100 2026-03-09T18:37:23.002 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 73/100 2026-03-09T18:37:23.002 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-ply-3.11-14.el9.noarch 74/100 2026-03-09T18:37:23.002 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 75/100 2026-03-09T18:37:23.002 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-prettytable-0.7.2-27.el9.noarch 76/100 2026-03-09T18:37:23.002 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 77/100 2026-03-09T18:37:23.002 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 78/100 2026-03-09T18:37:23.002 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 79/100 2026-03-09T18:37:23.002 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 80/100 2026-03-09T18:37:23.002 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 81/100 2026-03-09T18:37:23.002 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pysocks-1.7.1-12.el9.noarch 82/100 2026-03-09T18:37:23.002 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pytz-2021.1-5.el9.noarch 83/100 2026-03-09T18:37:23.002 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 84/100 2026-03-09T18:37:23.002 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 85/100 2026-03-09T18:37:23.003 
INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 86/100 2026-03-09T18:37:23.003 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 87/100 2026-03-09T18:37:23.003 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 88/100 2026-03-09T18:37:23.003 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 89/100 2026-03-09T18:37:23.003 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 90/100 2026-03-09T18:37:23.003 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 91/100 2026-03-09T18:37:23.003 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 92/100 2026-03-09T18:37:23.003 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 93/100 2026-03-09T18:37:23.003 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 94/100 2026-03-09T18:37:23.003 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 95/100 2026-03-09T18:37:23.003 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 96/100 2026-03-09T18:37:23.003 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 97/100 2026-03-09T18:37:23.003 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 98/100 2026-03-09T18:37:23.003 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 99/100 2026-03-09T18:37:23.083 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 100/100 2026-03-09T18:37:23.083 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:37:23.083 INFO:teuthology.orchestra.run.vm09.stdout:Removed: 2026-03-09T18:37:23.083 
INFO:teuthology.orchestra.run.vm09.stdout: abseil-cpp-20211102.0-4.el9.x86_64 2026-03-09T18:37:23.083 INFO:teuthology.orchestra.run.vm09.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:37:23.083 INFO:teuthology.orchestra.run.vm09.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:37:23.083 INFO:teuthology.orchestra.run.vm09.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T18:37:23.083 INFO:teuthology.orchestra.run.vm09.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:37:23.083 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:37:23.083 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T18:37:23.083 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T18:37:23.083 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T18:37:23.083 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T18:37:23.083 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T18:37:23.083 INFO:teuthology.orchestra.run.vm09.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:37:23.083 INFO:teuthology.orchestra.run.vm09.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T18:37:23.083 INFO:teuthology.orchestra.run.vm09.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:37:23.083 INFO:teuthology.orchestra.run.vm09.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T18:37:23.083 INFO:teuthology.orchestra.run.vm09.stdout: cryptsetup-2.8.1-3.el9.x86_64 2026-03-09T18:37:23.083 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-09T18:37:23.083 INFO:teuthology.orchestra.run.vm09.stdout: 
flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-09T18:37:23.083 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-09T18:37:23.083 INFO:teuthology.orchestra.run.vm09.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-09T18:37:23.083 INFO:teuthology.orchestra.run.vm09.stdout: grpc-data-1.46.7-10.el9.noarch 2026-03-09T18:37:23.083 INFO:teuthology.orchestra.run.vm09.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-09T18:37:23.083 INFO:teuthology.orchestra.run.vm09.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:37:23.083 INFO:teuthology.orchestra.run.vm09.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-09T18:37:23.083 INFO:teuthology.orchestra.run.vm09.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-09T18:37:23.083 INFO:teuthology.orchestra.run.vm09.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-09T18:37:23.083 INFO:teuthology.orchestra.run.vm09.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-09T18:37:23.083 INFO:teuthology.orchestra.run.vm09.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:37:23.083 INFO:teuthology.orchestra.run.vm09.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: pciutils-3.7.0-7.el9.x86_64 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: protobuf-3.14.0-17.el9.x86_64 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: protobuf-compiler-3.14.0-17.el9.x86_64 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-09T18:37:23.084 
INFO:teuthology.orchestra.run.vm09.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-chardet-4.0.0-5.el9.noarch 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio-1.46.7-10.el9.x86_64 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-idna-2.10-7.el9.1.noarch 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 
2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-jsonpatch-1.21-16.el9.noarch 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-jsonpointer-2.0-4.el9.noarch 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-oauthlib-3.1.1-5.el9.noarch 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-ply-3.11-14.el9.noarch 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-portend-3.1.0-2.el9.noarch 
2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-prettytable-0.7.2-27.el9.noarch 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-protobuf-3.14.0-17.el9.noarch 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-pysocks-1.7.1-12.el9.noarch 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-pytz-2021.1-5.el9.noarch 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-09T18:37:23.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-09T18:37:23.085 INFO:teuthology.orchestra.run.vm09.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-09T18:37:23.085 INFO:teuthology.orchestra.run.vm09.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-09T18:37:23.085 INFO:teuthology.orchestra.run.vm09.stdout: python3-websocket-client-1.2.3-2.el9.noarch 
2026-03-09T18:37:23.085 INFO:teuthology.orchestra.run.vm09.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-09T18:37:23.085 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-09T18:37:23.085 INFO:teuthology.orchestra.run.vm09.stdout: qatlib-25.08.0-2.el9.x86_64 2026-03-09T18:37:23.085 INFO:teuthology.orchestra.run.vm09.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-09T18:37:23.085 INFO:teuthology.orchestra.run.vm09.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-09T18:37:23.085 INFO:teuthology.orchestra.run.vm09.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:37:23.085 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:37:23.085 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T18:37:23.277 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-09T18:37:23.278 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T18:37:23.278 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size 2026-03-09T18:37:23.278 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T18:37:23.278 INFO:teuthology.orchestra.run.vm09.stdout:Removing: 2026-03-09T18:37:23.278 INFO:teuthology.orchestra.run.vm09.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 775 k 2026-03-09T18:37:23.278 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:37:23.278 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary 2026-03-09T18:37:23.278 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T18:37:23.278 INFO:teuthology.orchestra.run.vm09.stdout:Remove 1 Package 2026-03-09T18:37:23.278 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:37:23.278 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 775 k 
2026-03-09T18:37:23.278 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check 2026-03-09T18:37:23.280 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded. 2026-03-09T18:37:23.280 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test 2026-03-09T18:37:23.281 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded. 2026-03-09T18:37:23.281 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction 2026-03-09T18:37:23.297 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1 2026-03-09T18:37:23.297 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1 2026-03-09T18:37:23.392 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1 2026-03-09T18:37:23.433 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1 2026-03-09T18:37:23.433 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:37:23.433 INFO:teuthology.orchestra.run.vm09.stdout:Removed: 2026-03-09T18:37:23.433 INFO:teuthology.orchestra.run.vm09.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T18:37:23.433 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:37:23.433 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T18:37:23.603 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: ceph-immutable-object-cache 2026-03-09T18:37:23.603 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal. 2026-03-09T18:37:23.606 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-09T18:37:23.607 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do. 2026-03-09T18:37:23.607 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T18:37:23.767 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: ceph-mgr 2026-03-09T18:37:23.767 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal. 
2026-03-09T18:37:23.771 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-09T18:37:23.771 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do. 2026-03-09T18:37:23.772 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T18:37:23.930 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: ceph-mgr-dashboard 2026-03-09T18:37:23.930 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal. 2026-03-09T18:37:23.934 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-09T18:37:23.934 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do. 2026-03-09T18:37:23.934 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T18:37:24.092 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: ceph-mgr-diskprediction-local 2026-03-09T18:37:24.092 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal. 2026-03-09T18:37:24.095 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-09T18:37:24.096 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do. 2026-03-09T18:37:24.096 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T18:37:24.254 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: ceph-mgr-rook 2026-03-09T18:37:24.254 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal. 2026-03-09T18:37:24.257 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-09T18:37:24.258 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do. 2026-03-09T18:37:24.258 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T18:37:24.422 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: ceph-mgr-cephadm 2026-03-09T18:37:24.422 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal. 2026-03-09T18:37:24.426 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-09T18:37:24.426 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do. 
2026-03-09T18:37:24.426 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T18:37:24.608 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-09T18:37:24.608 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T18:37:24.608 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size 2026-03-09T18:37:24.608 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T18:37:24.608 INFO:teuthology.orchestra.run.vm09.stdout:Removing: 2026-03-09T18:37:24.608 INFO:teuthology.orchestra.run.vm09.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.6 M 2026-03-09T18:37:24.608 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:37:24.609 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary 2026-03-09T18:37:24.609 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T18:37:24.609 INFO:teuthology.orchestra.run.vm09.stdout:Remove 1 Package 2026-03-09T18:37:24.609 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:37:24.609 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 3.6 M 2026-03-09T18:37:24.609 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check 2026-03-09T18:37:24.610 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded. 2026-03-09T18:37:24.610 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test 2026-03-09T18:37:24.620 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded. 
2026-03-09T18:37:24.621 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction 2026-03-09T18:37:24.646 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1 2026-03-09T18:37:24.660 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1 2026-03-09T18:37:24.728 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1 2026-03-09T18:37:24.776 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1 2026-03-09T18:37:24.776 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:37:24.776 INFO:teuthology.orchestra.run.vm09.stdout:Removed: 2026-03-09T18:37:24.776 INFO:teuthology.orchestra.run.vm09.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:37:24.776 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:37:24.776 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T18:37:24.952 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: ceph-volume 2026-03-09T18:37:24.952 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal. 2026-03-09T18:37:24.955 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-09T18:37:24.956 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do. 2026-03-09T18:37:24.956 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T18:37:25.128 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 
2026-03-09T18:37:25.129 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T18:37:25.129 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repo Size 2026-03-09T18:37:25.129 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T18:37:25.129 INFO:teuthology.orchestra.run.vm09.stdout:Removing: 2026-03-09T18:37:25.129 INFO:teuthology.orchestra.run.vm09.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 456 k 2026-03-09T18:37:25.129 INFO:teuthology.orchestra.run.vm09.stdout:Removing dependent packages: 2026-03-09T18:37:25.129 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 153 k 2026-03-09T18:37:25.129 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:37:25.129 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary 2026-03-09T18:37:25.129 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T18:37:25.129 INFO:teuthology.orchestra.run.vm09.stdout:Remove 2 Packages 2026-03-09T18:37:25.129 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:37:25.129 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 610 k 2026-03-09T18:37:25.129 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check 2026-03-09T18:37:25.131 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded. 2026-03-09T18:37:25.131 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test 2026-03-09T18:37:25.141 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded. 
2026-03-09T18:37:25.141 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction 2026-03-09T18:37:25.166 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1 2026-03-09T18:37:25.168 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-09T18:37:25.180 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2 2026-03-09T18:37:25.237 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2 2026-03-09T18:37:25.237 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-09T18:37:25.279 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2 2026-03-09T18:37:25.279 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:37:25.279 INFO:teuthology.orchestra.run.vm09.stdout:Removed: 2026-03-09T18:37:25.279 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:37:25.279 INFO:teuthology.orchestra.run.vm09.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:37:25.279 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:37:25.279 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T18:37:25.467 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 
2026-03-09T18:37:25.467 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T18:37:25.467 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repo Size 2026-03-09T18:37:25.467 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T18:37:25.467 INFO:teuthology.orchestra.run.vm09.stdout:Removing: 2026-03-09T18:37:25.467 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.0 M 2026-03-09T18:37:25.467 INFO:teuthology.orchestra.run.vm09.stdout:Removing dependent packages: 2026-03-09T18:37:25.467 INFO:teuthology.orchestra.run.vm09.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 514 k 2026-03-09T18:37:25.467 INFO:teuthology.orchestra.run.vm09.stdout:Removing unused dependencies: 2026-03-09T18:37:25.467 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 187 k 2026-03-09T18:37:25.467 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:37:25.467 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary 2026-03-09T18:37:25.468 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T18:37:25.468 INFO:teuthology.orchestra.run.vm09.stdout:Remove 3 Packages 2026-03-09T18:37:25.468 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:37:25.468 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 3.7 M 2026-03-09T18:37:25.468 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check 2026-03-09T18:37:25.470 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded. 2026-03-09T18:37:25.470 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test 2026-03-09T18:37:25.491 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded. 
2026-03-09T18:37:25.492 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction 2026-03-09T18:37:25.526 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1 2026-03-09T18:37:25.529 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3 2026-03-09T18:37:25.530 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3 2026-03-09T18:37:25.530 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3 2026-03-09T18:37:25.593 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3 2026-03-09T18:37:25.593 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3 2026-03-09T18:37:25.593 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3 2026-03-09T18:37:25.632 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3 2026-03-09T18:37:25.632 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:37:25.632 INFO:teuthology.orchestra.run.vm09.stdout:Removed: 2026-03-09T18:37:25.632 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:37:25.632 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:37:25.632 INFO:teuthology.orchestra.run.vm09.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:37:25.632 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:37:25.632 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T18:37:25.810 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: libcephfs-devel 2026-03-09T18:37:25.810 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal. 
2026-03-09T18:37:25.814 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-09T18:37:25.814 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-09T18:37:25.814 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-09T18:37:25.990 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-09T18:37:25.992 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-09T18:37:25.992 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size
2026-03-09T18:37:25.992 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-09T18:37:25.992 INFO:teuthology.orchestra.run.vm09.stdout:Removing:
2026-03-09T18:37:25.992 INFO:teuthology.orchestra.run.vm09.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 12 M
2026-03-09T18:37:25.992 INFO:teuthology.orchestra.run.vm09.stdout:Removing dependent packages:
2026-03-09T18:37:25.992 INFO:teuthology.orchestra.run.vm09.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M
2026-03-09T18:37:25.992 INFO:teuthology.orchestra.run.vm09.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M
2026-03-09T18:37:25.992 INFO:teuthology.orchestra.run.vm09.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 265 k
2026-03-09T18:37:25.992 INFO:teuthology.orchestra.run.vm09.stdout: qemu-kvm-block-rbd x86_64 17:10.1.0-15.el9 @appstream 37 k
2026-03-09T18:37:25.992 INFO:teuthology.orchestra.run.vm09.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 227 k
2026-03-09T18:37:25.992 INFO:teuthology.orchestra.run.vm09.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 490 k
2026-03-09T18:37:25.992 INFO:teuthology.orchestra.run.vm09.stdout:Removing unused dependencies:
2026-03-09T18:37:25.992 INFO:teuthology.orchestra.run.vm09.stdout: boost-program-options x86_64 1.75.0-13.el9 @appstream 276 k
2026-03-09T18:37:25.992 INFO:teuthology.orchestra.run.vm09.stdout: libarrow x86_64 9.0.0-15.el9 @epel 18 M
2026-03-09T18:37:25.992 INFO:teuthology.orchestra.run.vm09.stdout: libarrow-doc noarch 9.0.0-15.el9 @epel 122 k
2026-03-09T18:37:25.992 INFO:teuthology.orchestra.run.vm09.stdout: libnbd x86_64 1.20.3-4.el9 @appstream 453 k
2026-03-09T18:37:25.992 INFO:teuthology.orchestra.run.vm09.stdout: libpmemobj x86_64 1.12.1-1.el9 @appstream 383 k
2026-03-09T18:37:25.992 INFO:teuthology.orchestra.run.vm09.stdout: librabbitmq x86_64 0.11.0-7.el9 @appstream 102 k
2026-03-09T18:37:25.992 INFO:teuthology.orchestra.run.vm09.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M
2026-03-09T18:37:25.992 INFO:teuthology.orchestra.run.vm09.stdout: librdkafka x86_64 1.6.1-102.el9 @appstream 2.0 M
2026-03-09T18:37:25.992 INFO:teuthology.orchestra.run.vm09.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 19 M
2026-03-09T18:37:25.992 INFO:teuthology.orchestra.run.vm09.stdout: lttng-ust x86_64 2.12.0-6.el9 @appstream 1.0 M
2026-03-09T18:37:25.992 INFO:teuthology.orchestra.run.vm09.stdout: parquet-libs x86_64 9.0.0-15.el9 @epel 2.8 M
2026-03-09T18:37:25.992 INFO:teuthology.orchestra.run.vm09.stdout: re2 x86_64 1:20211101-20.el9 @epel 472 k
2026-03-09T18:37:25.992 INFO:teuthology.orchestra.run.vm09.stdout: thrift x86_64 0.15.0-4.el9 @epel 4.8 M
2026-03-09T18:37:25.992 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:37:25.992 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary
2026-03-09T18:37:25.992 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-09T18:37:25.992 INFO:teuthology.orchestra.run.vm09.stdout:Remove 20 Packages
2026-03-09T18:37:25.992 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:37:25.992 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 79 M
2026-03-09T18:37:25.992 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check
2026-03-09T18:37:25.996 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded.
2026-03-09T18:37:25.996 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test
2026-03-09T18:37:26.019 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded.
2026-03-09T18:37:26.019 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction
2026-03-09T18:37:26.062 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1
2026-03-09T18:37:26.066 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 1/20
2026-03-09T18:37:26.068 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2/20
2026-03-09T18:37:26.071 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 3/20
2026-03-09T18:37:26.071 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20
2026-03-09T18:37:26.084 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20
2026-03-09T18:37:26.086 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : parquet-libs-9.0.0-15.el9.x86_64 5/20
2026-03-09T18:37:26.088 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 6/20
2026-03-09T18:37:26.090 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20
2026-03-09T18:37:26.091 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 8/20
2026-03-09T18:37:26.094 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libarrow-doc-9.0.0-15.el9.noarch 9/20
2026-03-09T18:37:26.094 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-09T18:37:26.107 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-09T18:37:26.108 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20
2026-03-09T18:37:26.108 INFO:teuthology.orchestra.run.vm09.stdout:warning: file /etc/ceph: remove failed: No such file or directory
2026-03-09T18:37:26.108 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:37:26.121 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20
2026-03-09T18:37:26.124 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libarrow-9.0.0-15.el9.x86_64 12/20
2026-03-09T18:37:26.127 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : re2-1:20211101-20.el9.x86_64 13/20
2026-03-09T18:37:26.131 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : lttng-ust-2.12.0-6.el9.x86_64 14/20
2026-03-09T18:37:26.134 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : thrift-0.15.0-4.el9.x86_64 15/20
2026-03-09T18:37:26.137 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libnbd-1.20.3-4.el9.x86_64 16/20
2026-03-09T18:37:26.139 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libpmemobj-1.12.1-1.el9.x86_64 17/20
2026-03-09T18:37:26.141 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : boost-program-options-1.75.0-13.el9.x86_64 18/20
2026-03-09T18:37:26.143 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : librabbitmq-0.11.0-7.el9.x86_64 19/20
2026-03-09T18:37:26.157 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : librdkafka-1.6.1-102.el9.x86_64 20/20
2026-03-09T18:37:26.221 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librdkafka-1.6.1-102.el9.x86_64 20/20
2026-03-09T18:37:26.221 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 1/20
2026-03-09T18:37:26.221 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 2/20
2026-03-09T18:37:26.221 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 3/20
2026-03-09T18:37:26.221 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 4/20
2026-03-09T18:37:26.222 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 5/20
2026-03-09T18:37:26.222 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 6/20
2026-03-09T18:37:26.222 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20
2026-03-09T18:37:26.222 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 8/20
2026-03-09T18:37:26.222 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 9/20
2026-03-09T18:37:26.222 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-09T18:37:26.222 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 11/20
2026-03-09T18:37:26.222 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 12/20
2026-03-09T18:37:26.222 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 13/20
2026-03-09T18:37:26.222 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 14/20
2026-03-09T18:37:26.222 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 15/20
2026-03-09T18:37:26.222 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 16/20
2026-03-09T18:37:26.222 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 17/20
2026-03-09T18:37:26.222 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 18/20
2026-03-09T18:37:26.222 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : re2-1:20211101-20.el9.x86_64 19/20
2026-03-09T18:37:26.274 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 20/20
2026-03-09T18:37:26.274 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:37:26.274 INFO:teuthology.orchestra.run.vm09.stdout:Removed:
2026-03-09T18:37:26.275 INFO:teuthology.orchestra.run.vm09.stdout: boost-program-options-1.75.0-13.el9.x86_64
2026-03-09T18:37:26.275 INFO:teuthology.orchestra.run.vm09.stdout: libarrow-9.0.0-15.el9.x86_64
2026-03-09T18:37:26.275 INFO:teuthology.orchestra.run.vm09.stdout: libarrow-doc-9.0.0-15.el9.noarch
2026-03-09T18:37:26.275 INFO:teuthology.orchestra.run.vm09.stdout: libnbd-1.20.3-4.el9.x86_64
2026-03-09T18:37:26.275 INFO:teuthology.orchestra.run.vm09.stdout: libpmemobj-1.12.1-1.el9.x86_64
2026-03-09T18:37:26.275 INFO:teuthology.orchestra.run.vm09.stdout: librabbitmq-0.11.0-7.el9.x86_64
2026-03-09T18:37:26.275 INFO:teuthology.orchestra.run.vm09.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:37:26.275 INFO:teuthology.orchestra.run.vm09.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:37:26.275 INFO:teuthology.orchestra.run.vm09.stdout: librdkafka-1.6.1-102.el9.x86_64
2026-03-09T18:37:26.275 INFO:teuthology.orchestra.run.vm09.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:37:26.275 INFO:teuthology.orchestra.run.vm09.stdout: lttng-ust-2.12.0-6.el9.x86_64
2026-03-09T18:37:26.275 INFO:teuthology.orchestra.run.vm09.stdout: parquet-libs-9.0.0-15.el9.x86_64
2026-03-09T18:37:26.275 INFO:teuthology.orchestra.run.vm09.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:37:26.275 INFO:teuthology.orchestra.run.vm09.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:37:26.275 INFO:teuthology.orchestra.run.vm09.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:37:26.275 INFO:teuthology.orchestra.run.vm09.stdout: qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64
2026-03-09T18:37:26.275 INFO:teuthology.orchestra.run.vm09.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:37:26.275 INFO:teuthology.orchestra.run.vm09.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:37:26.275 INFO:teuthology.orchestra.run.vm09.stdout: re2-1:20211101-20.el9.x86_64
2026-03-09T18:37:26.275 INFO:teuthology.orchestra.run.vm09.stdout: thrift-0.15.0-4.el9.x86_64
2026-03-09T18:37:26.275 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:37:26.275 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-09T18:37:26.478 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: librbd1
2026-03-09T18:37:26.478 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-09T18:37:26.480 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-09T18:37:26.481 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-09T18:37:26.481 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-09T18:37:26.651 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: python3-rados
2026-03-09T18:37:26.651 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-09T18:37:26.653 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-09T18:37:26.653 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-09T18:37:26.653 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-09T18:37:26.812 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: python3-rgw
2026-03-09T18:37:26.812 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-09T18:37:26.814 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-09T18:37:26.815 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-09T18:37:26.815 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-09T18:37:26.974 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: python3-cephfs
2026-03-09T18:37:26.974 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-09T18:37:26.976 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-09T18:37:26.977 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-09T18:37:26.977 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-09T18:37:27.135 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: python3-rbd
2026-03-09T18:37:27.135 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-09T18:37:27.138 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-09T18:37:27.138 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-09T18:37:27.138 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-09T18:37:27.300 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: rbd-fuse
2026-03-09T18:37:27.300 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-09T18:37:27.302 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-09T18:37:27.303 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-09T18:37:27.303 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-09T18:37:27.469 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: rbd-mirror
2026-03-09T18:37:27.469 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-09T18:37:27.471 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-09T18:37:27.472 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-09T18:37:27.472 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-09T18:37:27.636 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: rbd-nbd
2026-03-09T18:37:27.636 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-09T18:37:27.638 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-09T18:37:27.639 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-09T18:37:27.639 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-09T18:37:27.661 DEBUG:teuthology.orchestra.run.vm09:> sudo yum clean all
2026-03-09T18:37:27.804 INFO:teuthology.orchestra.run.vm09.stdout:56 files removed
2026-03-09T18:37:27.811 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 85/100
2026-03-09T18:37:27.811 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /sys
2026-03-09T18:37:27.811 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /proc
2026-03-09T18:37:27.811 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /mnt
2026-03-09T18:37:27.811 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /var/tmp
2026-03-09T18:37:27.811 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /home
2026-03-09T18:37:27.811 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /root
2026-03-09T18:37:27.811 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /tmp
2026-03-09T18:37:27.811 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T18:37:27.820 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : qatlib-25.08.0-2.el9.x86_64 86/100
2026-03-09T18:37:27.825 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-09T18:37:27.837 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 87/100
2026-03-09T18:37:27.837 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : qatlib-service-25.08.0-2.el9.x86_64 87/100
2026-03-09T18:37:27.844 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 87/100
2026-03-09T18:37:27.847 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : gperftools-libs-2.9.1-3.el9.x86_64 88/100
2026-03-09T18:37:27.847 DEBUG:teuthology.orchestra.run.vm09:> sudo yum clean expire-cache
2026-03-09T18:37:27.849 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libunwind-1.6.2-1.el9.x86_64 89/100
2026-03-09T18:37:27.851 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : pciutils-3.7.0-7.el9.x86_64 90/100
2026-03-09T18:37:27.853 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : liboath-2.6.12-1.el9.x86_64 91/100
2026-03-09T18:37:27.853 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 92/100
2026-03-09T18:37:27.865 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 92/100
2026-03-09T18:37:27.867 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ledmon-libs-1.1.0-3.el9.x86_64 93/100
2026-03-09T18:37:27.869 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libquadmath-11.5.0-14.el9.x86_64 94/100
2026-03-09T18:37:27.872 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-markupsafe-1.1.1-12.el9.x86_64 95/100
2026-03-09T18:37:27.874 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : protobuf-3.14.0-17.el9.x86_64 96/100
2026-03-09T18:37:27.879 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libconfig-1.7.2-9.el9.x86_64 97/100
2026-03-09T18:37:27.886 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : cryptsetup-2.8.1-3.el9.x86_64 98/100
2026-03-09T18:37:27.890 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : abseil-cpp-20211102.0-4.el9.x86_64 99/100
2026-03-09T18:37:27.890 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 100/100
2026-03-09T18:37:27.989 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 100/100
2026-03-09T18:37:27.989 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 1/100
2026-03-09T18:37:27.989 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/100
2026-03-09T18:37:27.989 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/100
2026-03-09T18:37:27.989 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 4/100
2026-03-09T18:37:27.989 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/100
2026-03-09T18:37:27.989 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 6/100
2026-03-09T18:37:27.989 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/100
2026-03-09T18:37:27.989 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 8/100
2026-03-09T18:37:27.989 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 9/100
2026-03-09T18:37:27.989 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 10/100
2026-03-09T18:37:27.989 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 11/100
2026-03-09T18:37:27.989 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/100
2026-03-09T18:37:27.989 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 13/100
2026-03-09T18:37:27.989 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 14/100
2026-03-09T18:37:27.989 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 15/100
2026-03-09T18:37:27.989 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 16/100
2026-03-09T18:37:27.990 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 17/100
2026-03-09T18:37:27.990 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 18/100
2026-03-09T18:37:27.990 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 19/100
2026-03-09T18:37:27.990 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 20/100
2026-03-09T18:37:27.990 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 21/100
2026-03-09T18:37:27.990 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 22/100
2026-03-09T18:37:27.990 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 23/100
2026-03-09T18:37:27.990 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 24/100
2026-03-09T18:37:27.990 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 25/100
2026-03-09T18:37:27.990 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 26/100
2026-03-09T18:37:27.990 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 27/100
2026-03-09T18:37:27.990 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 28/100
2026-03-09T18:37:27.990 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 29/100
2026-03-09T18:37:27.990 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 30/100
2026-03-09T18:37:27.990 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 31/100
2026-03-09T18:37:27.990 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 32/100
2026-03-09T18:37:27.990 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 33/100
2026-03-09T18:37:27.990 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 34/100
2026-03-09T18:37:27.990 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 35/100
2026-03-09T18:37:27.990 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 36/100
2026-03-09T18:37:27.990 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 37/100
2026-03-09T18:37:27.990 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 38/100
2026-03-09T18:37:27.990 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 39/100
2026-03-09T18:37:27.990 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 40/100
2026-03-09T18:37:27.990 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 41/100
2026-03-09T18:37:27.990 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 42/100
2026-03-09T18:37:27.990 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 43/100
2026-03-09T18:37:27.991 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/100
2026-03-09T18:37:27.991 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-chardet-4.0.0-5.el9.noarch 45/100
2026-03-09T18:37:27.991 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 46/100
2026-03-09T18:37:27.991 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 47/100
2026-03-09T18:37:27.991 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 48/100
2026-03-09T18:37:27.991 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 49/100
2026-03-09T18:37:27.991 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 50/100
2026-03-09T18:37:27.991 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 51/100
2026-03-09T18:37:27.991 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 52/100
2026-03-09T18:37:27.991 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-idna-2.10-7.el9.1.noarch 53/100
2026-03-09T18:37:27.991 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 54/100
2026-03-09T18:37:27.991 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 55/100
2026-03-09T18:37:27.991 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 56/100
2026-03-09T18:37:27.991 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 57/100
2026-03-09T18:37:27.991 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 58/100
2026-03-09T18:37:27.991 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 59/100
2026-03-09T18:37:27.991 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 60/100
2026-03-09T18:37:27.991 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jsonpatch-1.21-16.el9.noarch 61/100
2026-03-09T18:37:27.991 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jsonpointer-2.0-4.el9.noarch 62/100
2026-03-09T18:37:27.991 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 63/100
2026-03-09T18:37:27.991 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 64/100
2026-03-09T18:37:27.991 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 65/100
2026-03-09T18:37:27.991 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 66/100
2026-03-09T18:37:27.991 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 67/100
2026-03-09T18:37:27.991 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 68/100
2026-03-09T18:37:27.991 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 69/100
2026-03-09T18:37:27.991 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 70/100
2026-03-09T18:37:27.991 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 71/100
2026-03-09T18:37:27.991 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-oauthlib-3.1.1-5.el9.noarch 72/100
2026-03-09T18:37:27.991 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 73/100
2026-03-09T18:37:27.991 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-ply-3.11-14.el9.noarch 74/100
2026-03-09T18:37:27.991 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 75/100
2026-03-09T18:37:27.991 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-prettytable-0.7.2-27.el9.noarch 76/100
2026-03-09T18:37:27.991 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 77/100
2026-03-09T18:37:27.992 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 78/100
2026-03-09T18:37:27.992 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 79/100
2026-03-09T18:37:27.992 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 80/100
2026-03-09T18:37:27.992 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 81/100
2026-03-09T18:37:27.992 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pysocks-1.7.1-12.el9.noarch 82/100
2026-03-09T18:37:27.992 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pytz-2021.1-5.el9.noarch 83/100
2026-03-09T18:37:27.992 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 84/100
2026-03-09T18:37:27.992 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 85/100
2026-03-09T18:37:27.992 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 86/100
2026-03-09T18:37:27.992 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 87/100
2026-03-09T18:37:27.992 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 88/100
2026-03-09T18:37:27.992 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 89/100
2026-03-09T18:37:27.992 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 90/100
2026-03-09T18:37:27.992 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 91/100
2026-03-09T18:37:27.992 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 92/100
2026-03-09T18:37:27.992 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 93/100
2026-03-09T18:37:27.992 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 94/100
2026-03-09T18:37:27.992 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 95/100
2026-03-09T18:37:27.992 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 96/100
2026-03-09T18:37:27.992 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 97/100
2026-03-09T18:37:27.992 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 98/100
2026-03-09T18:37:27.992 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 99/100
2026-03-09T18:37:28.002 INFO:teuthology.orchestra.run.vm09.stdout:Cache was expired
2026-03-09T18:37:28.002 INFO:teuthology.orchestra.run.vm09.stdout:0 files removed
2026-03-09T18:37:28.022 DEBUG:teuthology.parallel:result is None
2026-03-09T18:37:28.075 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 100/100
2026-03-09T18:37:28.075 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T18:37:28.075 INFO:teuthology.orchestra.run.vm04.stdout:Removed:
2026-03-09T18:37:28.075 INFO:teuthology.orchestra.run.vm04.stdout: abseil-cpp-20211102.0-4.el9.x86_64
2026-03-09T18:37:28.075 INFO:teuthology.orchestra.run.vm04.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:37:28.075 INFO:teuthology.orchestra.run.vm04.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:37:28.075 INFO:teuthology.orchestra.run.vm04.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T18:37:28.075 INFO:teuthology.orchestra.run.vm04.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:37:28.075 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:37:28.075 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T18:37:28.075 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T18:37:28.075 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T18:37:28.075 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T18:37:28.075 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T18:37:28.075 INFO:teuthology.orchestra.run.vm04.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:37:28.075 INFO:teuthology.orchestra.run.vm04.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T18:37:28.075 INFO:teuthology.orchestra.run.vm04.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:37:28.075 INFO:teuthology.orchestra.run.vm04.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T18:37:28.075 INFO:teuthology.orchestra.run.vm04.stdout: cryptsetup-2.8.1-3.el9.x86_64
2026-03-09T18:37:28.075 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas-3.0.4-9.el9.x86_64
2026-03-09T18:37:28.075 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64
2026-03-09T18:37:28.075 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64
2026-03-09T18:37:28.075 INFO:teuthology.orchestra.run.vm04.stdout: gperftools-libs-2.9.1-3.el9.x86_64
2026-03-09T18:37:28.075 INFO:teuthology.orchestra.run.vm04.stdout: grpc-data-1.46.7-10.el9.noarch
2026-03-09T18:37:28.075 INFO:teuthology.orchestra.run.vm04.stdout: ledmon-libs-1.1.0-3.el9.x86_64
2026-03-09T18:37:28.075 INFO:teuthology.orchestra.run.vm04.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:37:28.075 INFO:teuthology.orchestra.run.vm04.stdout: libconfig-1.7.2-9.el9.x86_64
2026-03-09T18:37:28.075 INFO:teuthology.orchestra.run.vm04.stdout: libgfortran-11.5.0-14.el9.x86_64
2026-03-09T18:37:28.075 INFO:teuthology.orchestra.run.vm04.stdout: liboath-2.6.12-1.el9.x86_64
2026-03-09T18:37:28.075 INFO:teuthology.orchestra.run.vm04.stdout: libquadmath-11.5.0-14.el9.x86_64
2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: libunwind-1.6.2-1.el9.x86_64
2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: openblas-0.3.29-1.el9.x86_64
2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: openblas-openmp-0.3.29-1.el9.x86_64
2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: pciutils-3.7.0-7.el9.x86_64
2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: protobuf-3.14.0-17.el9.x86_64
2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: protobuf-compiler-3.14.0-17.el9.x86_64
2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-asyncssh-2.13.2-5.el9.noarch
2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-autocommand-2.2.2-8.el9.noarch
2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-babel-2.9.1-2.el9.noarch
2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch
2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-bcrypt-3.2.2-1.el9.x86_64
2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools-4.2.4-1.el9.noarch
2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-certifi-2023.05.07-4.el9.noarch
2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-cffi-1.14.5-5.el9.x86_64
2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-chardet-4.0.0-5.el9.noarch
2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-cheroot-10.0.1-4.el9.noarch
2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy-18.6.1-2.el9.noarch
2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-cryptography-36.0.1-5.el9.x86_64
2026-03-09T18:37:28.076
INFO:teuthology.orchestra.run.vm04.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-grpcio-1.46.7-10.el9.x86_64 2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-idna-2.10-7.el9.1.noarch 2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-jsonpatch-1.21-16.el9.noarch 2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-jsonpointer-2.0-4.el9.noarch 2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 
2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-oauthlib-3.1.1-5.el9.noarch 2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-ply-3.11-14.el9.noarch 2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-prettytable-0.7.2-27.el9.noarch 2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-protobuf-3.14.0-17.el9.noarch 2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-pysocks-1.7.1-12.el9.noarch 2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-pytz-2021.1-5.el9.noarch 2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 
2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-09T18:37:28.076 INFO:teuthology.orchestra.run.vm04.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-09T18:37:28.077 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-09T18:37:28.077 INFO:teuthology.orchestra.run.vm04.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-09T18:37:28.077 INFO:teuthology.orchestra.run.vm04.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-09T18:37:28.077 INFO:teuthology.orchestra.run.vm04.stdout: qatlib-25.08.0-2.el9.x86_64 2026-03-09T18:37:28.077 INFO:teuthology.orchestra.run.vm04.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-09T18:37:28.077 INFO:teuthology.orchestra.run.vm04.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-09T18:37:28.077 INFO:teuthology.orchestra.run.vm04.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:37:28.077 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:37:28.077 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 2026-03-09T18:37:28.265 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 
2026-03-09T18:37:28.265 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================ 2026-03-09T18:37:28.265 INFO:teuthology.orchestra.run.vm04.stdout: Package Arch Version Repository Size 2026-03-09T18:37:28.265 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================ 2026-03-09T18:37:28.266 INFO:teuthology.orchestra.run.vm04.stdout:Removing: 2026-03-09T18:37:28.266 INFO:teuthology.orchestra.run.vm04.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 775 k 2026-03-09T18:37:28.266 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:37:28.266 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary 2026-03-09T18:37:28.266 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================ 2026-03-09T18:37:28.266 INFO:teuthology.orchestra.run.vm04.stdout:Remove 1 Package 2026-03-09T18:37:28.266 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:37:28.266 INFO:teuthology.orchestra.run.vm04.stdout:Freed space: 775 k 2026-03-09T18:37:28.266 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check 2026-03-09T18:37:28.267 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded. 2026-03-09T18:37:28.267 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test 2026-03-09T18:37:28.268 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded. 
2026-03-09T18:37:28.268 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction 2026-03-09T18:37:28.283 INFO:teuthology.orchestra.run.vm04.stdout: Preparing : 1/1 2026-03-09T18:37:28.283 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1 2026-03-09T18:37:28.391 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1 2026-03-09T18:37:28.432 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1 2026-03-09T18:37:28.432 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:37:28.432 INFO:teuthology.orchestra.run.vm04.stdout:Removed: 2026-03-09T18:37:28.432 INFO:teuthology.orchestra.run.vm04.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T18:37:28.432 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:37:28.432 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 2026-03-09T18:37:28.611 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: ceph-immutable-object-cache 2026-03-09T18:37:28.611 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal. 2026-03-09T18:37:28.614 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 2026-03-09T18:37:28.614 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do. 2026-03-09T18:37:28.614 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 2026-03-09T18:37:28.771 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: ceph-mgr 2026-03-09T18:37:28.771 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal. 2026-03-09T18:37:28.774 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 2026-03-09T18:37:28.774 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do. 2026-03-09T18:37:28.774 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 
2026-03-09T18:37:28.933 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: ceph-mgr-dashboard 2026-03-09T18:37:28.933 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal. 2026-03-09T18:37:28.936 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 2026-03-09T18:37:28.936 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do. 2026-03-09T18:37:28.936 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 2026-03-09T18:37:29.091 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: ceph-mgr-diskprediction-local 2026-03-09T18:37:29.091 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal. 2026-03-09T18:37:29.094 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 2026-03-09T18:37:29.095 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do. 2026-03-09T18:37:29.095 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 2026-03-09T18:37:29.249 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: ceph-mgr-rook 2026-03-09T18:37:29.249 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal. 2026-03-09T18:37:29.252 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 2026-03-09T18:37:29.252 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do. 2026-03-09T18:37:29.252 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 2026-03-09T18:37:29.405 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: ceph-mgr-cephadm 2026-03-09T18:37:29.405 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal. 2026-03-09T18:37:29.408 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 2026-03-09T18:37:29.408 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do. 2026-03-09T18:37:29.408 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 2026-03-09T18:37:29.573 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 
2026-03-09T18:37:29.573 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================ 2026-03-09T18:37:29.573 INFO:teuthology.orchestra.run.vm04.stdout: Package Arch Version Repository Size 2026-03-09T18:37:29.573 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================ 2026-03-09T18:37:29.573 INFO:teuthology.orchestra.run.vm04.stdout:Removing: 2026-03-09T18:37:29.573 INFO:teuthology.orchestra.run.vm04.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.6 M 2026-03-09T18:37:29.573 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:37:29.573 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary 2026-03-09T18:37:29.573 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================ 2026-03-09T18:37:29.573 INFO:teuthology.orchestra.run.vm04.stdout:Remove 1 Package 2026-03-09T18:37:29.573 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:37:29.573 INFO:teuthology.orchestra.run.vm04.stdout:Freed space: 3.6 M 2026-03-09T18:37:29.573 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check 2026-03-09T18:37:29.575 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded. 2026-03-09T18:37:29.575 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test 2026-03-09T18:37:29.584 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded. 
2026-03-09T18:37:29.584 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction 2026-03-09T18:37:29.608 INFO:teuthology.orchestra.run.vm04.stdout: Preparing : 1/1 2026-03-09T18:37:29.621 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1 2026-03-09T18:37:29.682 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1 2026-03-09T18:37:29.722 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1 2026-03-09T18:37:29.723 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:37:29.723 INFO:teuthology.orchestra.run.vm04.stdout:Removed: 2026-03-09T18:37:29.723 INFO:teuthology.orchestra.run.vm04.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:37:29.723 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:37:29.723 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 2026-03-09T18:37:29.890 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: ceph-volume 2026-03-09T18:37:29.890 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal. 2026-03-09T18:37:29.893 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 2026-03-09T18:37:29.893 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do. 2026-03-09T18:37:29.893 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 2026-03-09T18:37:30.060 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 
2026-03-09T18:37:30.060 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================ 2026-03-09T18:37:30.060 INFO:teuthology.orchestra.run.vm04.stdout: Package Arch Version Repo Size 2026-03-09T18:37:30.060 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================ 2026-03-09T18:37:30.060 INFO:teuthology.orchestra.run.vm04.stdout:Removing: 2026-03-09T18:37:30.060 INFO:teuthology.orchestra.run.vm04.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 456 k 2026-03-09T18:37:30.060 INFO:teuthology.orchestra.run.vm04.stdout:Removing dependent packages: 2026-03-09T18:37:30.060 INFO:teuthology.orchestra.run.vm04.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 153 k 2026-03-09T18:37:30.060 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:37:30.060 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary 2026-03-09T18:37:30.060 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================ 2026-03-09T18:37:30.060 INFO:teuthology.orchestra.run.vm04.stdout:Remove 2 Packages 2026-03-09T18:37:30.060 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:37:30.060 INFO:teuthology.orchestra.run.vm04.stdout:Freed space: 610 k 2026-03-09T18:37:30.060 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check 2026-03-09T18:37:30.062 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded. 2026-03-09T18:37:30.062 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test 2026-03-09T18:37:30.072 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded. 
2026-03-09T18:37:30.072 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction 2026-03-09T18:37:30.096 INFO:teuthology.orchestra.run.vm04.stdout: Preparing : 1/1 2026-03-09T18:37:30.098 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-09T18:37:30.111 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2 2026-03-09T18:37:30.166 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2 2026-03-09T18:37:30.166 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-09T18:37:30.209 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2 2026-03-09T18:37:30.209 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:37:30.209 INFO:teuthology.orchestra.run.vm04.stdout:Removed: 2026-03-09T18:37:30.209 INFO:teuthology.orchestra.run.vm04.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:37:30.209 INFO:teuthology.orchestra.run.vm04.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:37:30.209 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:37:30.209 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 2026-03-09T18:37:30.383 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 
2026-03-09T18:37:30.383 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================ 2026-03-09T18:37:30.383 INFO:teuthology.orchestra.run.vm04.stdout: Package Arch Version Repo Size 2026-03-09T18:37:30.383 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================ 2026-03-09T18:37:30.383 INFO:teuthology.orchestra.run.vm04.stdout:Removing: 2026-03-09T18:37:30.383 INFO:teuthology.orchestra.run.vm04.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.0 M 2026-03-09T18:37:30.383 INFO:teuthology.orchestra.run.vm04.stdout:Removing dependent packages: 2026-03-09T18:37:30.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 514 k 2026-03-09T18:37:30.383 INFO:teuthology.orchestra.run.vm04.stdout:Removing unused dependencies: 2026-03-09T18:37:30.383 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 187 k 2026-03-09T18:37:30.383 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:37:30.383 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary 2026-03-09T18:37:30.383 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================ 2026-03-09T18:37:30.383 INFO:teuthology.orchestra.run.vm04.stdout:Remove 3 Packages 2026-03-09T18:37:30.383 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:37:30.384 INFO:teuthology.orchestra.run.vm04.stdout:Freed space: 3.7 M 2026-03-09T18:37:30.384 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check 2026-03-09T18:37:30.385 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded. 2026-03-09T18:37:30.385 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test 2026-03-09T18:37:30.400 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded. 
2026-03-09T18:37:30.401 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction 2026-03-09T18:37:30.434 INFO:teuthology.orchestra.run.vm04.stdout: Preparing : 1/1 2026-03-09T18:37:30.436 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3 2026-03-09T18:37:30.437 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3 2026-03-09T18:37:30.437 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3 2026-03-09T18:37:30.495 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3 2026-03-09T18:37:30.495 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3 2026-03-09T18:37:30.495 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3 2026-03-09T18:37:30.538 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3 2026-03-09T18:37:30.538 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:37:30.538 INFO:teuthology.orchestra.run.vm04.stdout:Removed: 2026-03-09T18:37:30.538 INFO:teuthology.orchestra.run.vm04.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:37:30.538 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:37:30.538 INFO:teuthology.orchestra.run.vm04.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T18:37:30.538 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:37:30.538 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 2026-03-09T18:37:30.705 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: libcephfs-devel 2026-03-09T18:37:30.705 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal. 
2026-03-09T18:37:30.708 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 2026-03-09T18:37:30.708 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do. 2026-03-09T18:37:30.709 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 2026-03-09T18:37:30.873 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 2026-03-09T18:37:30.875 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================ 2026-03-09T18:37:30.875 INFO:teuthology.orchestra.run.vm04.stdout: Package Arch Version Repository Size 2026-03-09T18:37:30.875 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================ 2026-03-09T18:37:30.875 INFO:teuthology.orchestra.run.vm04.stdout:Removing: 2026-03-09T18:37:30.875 INFO:teuthology.orchestra.run.vm04.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 12 M 2026-03-09T18:37:30.875 INFO:teuthology.orchestra.run.vm04.stdout:Removing dependent packages: 2026-03-09T18:37:30.875 INFO:teuthology.orchestra.run.vm04.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M 2026-03-09T18:37:30.875 INFO:teuthology.orchestra.run.vm04.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M 2026-03-09T18:37:30.875 INFO:teuthology.orchestra.run.vm04.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 265 k 2026-03-09T18:37:30.875 INFO:teuthology.orchestra.run.vm04.stdout: qemu-kvm-block-rbd x86_64 17:10.1.0-15.el9 @appstream 37 k 2026-03-09T18:37:30.875 INFO:teuthology.orchestra.run.vm04.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 227 k 2026-03-09T18:37:30.875 INFO:teuthology.orchestra.run.vm04.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 490 k 2026-03-09T18:37:30.875 INFO:teuthology.orchestra.run.vm04.stdout:Removing unused dependencies: 2026-03-09T18:37:30.875 INFO:teuthology.orchestra.run.vm04.stdout: boost-program-options x86_64 1.75.0-13.el9 @appstream 276 k 
2026-03-09T18:37:30.875 INFO:teuthology.orchestra.run.vm04.stdout: libarrow x86_64 9.0.0-15.el9 @epel 18 M 2026-03-09T18:37:30.875 INFO:teuthology.orchestra.run.vm04.stdout: libarrow-doc noarch 9.0.0-15.el9 @epel 122 k 2026-03-09T18:37:30.875 INFO:teuthology.orchestra.run.vm04.stdout: libnbd x86_64 1.20.3-4.el9 @appstream 453 k 2026-03-09T18:37:30.875 INFO:teuthology.orchestra.run.vm04.stdout: libpmemobj x86_64 1.12.1-1.el9 @appstream 383 k 2026-03-09T18:37:30.875 INFO:teuthology.orchestra.run.vm04.stdout: librabbitmq x86_64 0.11.0-7.el9 @appstream 102 k 2026-03-09T18:37:30.875 INFO:teuthology.orchestra.run.vm04.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M 2026-03-09T18:37:30.875 INFO:teuthology.orchestra.run.vm04.stdout: librdkafka x86_64 1.6.1-102.el9 @appstream 2.0 M 2026-03-09T18:37:30.875 INFO:teuthology.orchestra.run.vm04.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 19 M 2026-03-09T18:37:30.875 INFO:teuthology.orchestra.run.vm04.stdout: lttng-ust x86_64 2.12.0-6.el9 @appstream 1.0 M 2026-03-09T18:37:30.875 INFO:teuthology.orchestra.run.vm04.stdout: parquet-libs x86_64 9.0.0-15.el9 @epel 2.8 M 2026-03-09T18:37:30.875 INFO:teuthology.orchestra.run.vm04.stdout: re2 x86_64 1:20211101-20.el9 @epel 472 k 2026-03-09T18:37:30.875 INFO:teuthology.orchestra.run.vm04.stdout: thrift x86_64 0.15.0-4.el9 @epel 4.8 M 2026-03-09T18:37:30.875 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:37:30.875 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary 2026-03-09T18:37:30.875 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================ 2026-03-09T18:37:30.875 INFO:teuthology.orchestra.run.vm04.stdout:Remove 20 Packages 2026-03-09T18:37:30.875 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T18:37:30.875 INFO:teuthology.orchestra.run.vm04.stdout:Freed space: 79 M 2026-03-09T18:37:30.875 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check 
2026-03-09T18:37:30.879 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded.
2026-03-09T18:37:30.879 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test
2026-03-09T18:37:30.901 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded.
2026-03-09T18:37:30.901 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction
2026-03-09T18:37:30.940 INFO:teuthology.orchestra.run.vm04.stdout: Preparing : 1/1
2026-03-09T18:37:30.943 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 1/20
2026-03-09T18:37:30.945 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2/20
2026-03-09T18:37:30.948 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 3/20
2026-03-09T18:37:30.948 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20
2026-03-09T18:37:30.961 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20
2026-03-09T18:37:30.962 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : parquet-libs-9.0.0-15.el9.x86_64 5/20
2026-03-09T18:37:30.964 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 6/20
2026-03-09T18:37:30.965 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20
2026-03-09T18:37:30.967 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 8/20
2026-03-09T18:37:30.970 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libarrow-doc-9.0.0-15.el9.noarch 9/20
2026-03-09T18:37:30.970 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-09T18:37:30.983 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-09T18:37:30.983 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20
2026-03-09T18:37:30.983 INFO:teuthology.orchestra.run.vm04.stdout:warning: file /etc/ceph: remove failed: No such file or directory
2026-03-09T18:37:30.983 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T18:37:30.996 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20
2026-03-09T18:37:30.998 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libarrow-9.0.0-15.el9.x86_64 12/20
2026-03-09T18:37:31.002 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : re2-1:20211101-20.el9.x86_64 13/20
2026-03-09T18:37:31.005 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : lttng-ust-2.12.0-6.el9.x86_64 14/20
2026-03-09T18:37:31.008 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : thrift-0.15.0-4.el9.x86_64 15/20
2026-03-09T18:37:31.010 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libnbd-1.20.3-4.el9.x86_64 16/20
2026-03-09T18:37:31.012 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libpmemobj-1.12.1-1.el9.x86_64 17/20
2026-03-09T18:37:31.013 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : boost-program-options-1.75.0-13.el9.x86_64 18/20
2026-03-09T18:37:31.016 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : librabbitmq-0.11.0-7.el9.x86_64 19/20
2026-03-09T18:37:31.029 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : librdkafka-1.6.1-102.el9.x86_64 20/20
2026-03-09T18:37:31.090 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: librdkafka-1.6.1-102.el9.x86_64 20/20
2026-03-09T18:37:31.090 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 1/20
2026-03-09T18:37:31.090 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 2/20
2026-03-09T18:37:31.090 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 3/20
2026-03-09T18:37:31.090 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 4/20
2026-03-09T18:37:31.090 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 5/20
2026-03-09T18:37:31.090 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 6/20
2026-03-09T18:37:31.090 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20
2026-03-09T18:37:31.090 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 8/20
2026-03-09T18:37:31.090 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 9/20
2026-03-09T18:37:31.090 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-09T18:37:31.090 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 11/20
2026-03-09T18:37:31.090 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 12/20
2026-03-09T18:37:31.090 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 13/20
2026-03-09T18:37:31.090 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 14/20
2026-03-09T18:37:31.090 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 15/20
2026-03-09T18:37:31.090 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 16/20
2026-03-09T18:37:31.091 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 17/20
2026-03-09T18:37:31.091 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 18/20
2026-03-09T18:37:31.091 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : re2-1:20211101-20.el9.x86_64 19/20
2026-03-09T18:37:31.130 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 20/20
2026-03-09T18:37:31.130 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T18:37:31.130 INFO:teuthology.orchestra.run.vm04.stdout:Removed:
2026-03-09T18:37:31.130 INFO:teuthology.orchestra.run.vm04.stdout: boost-program-options-1.75.0-13.el9.x86_64
2026-03-09T18:37:31.130 INFO:teuthology.orchestra.run.vm04.stdout: libarrow-9.0.0-15.el9.x86_64
2026-03-09T18:37:31.130 INFO:teuthology.orchestra.run.vm04.stdout: libarrow-doc-9.0.0-15.el9.noarch
2026-03-09T18:37:31.130 INFO:teuthology.orchestra.run.vm04.stdout: libnbd-1.20.3-4.el9.x86_64
2026-03-09T18:37:31.130 INFO:teuthology.orchestra.run.vm04.stdout: libpmemobj-1.12.1-1.el9.x86_64
2026-03-09T18:37:31.130 INFO:teuthology.orchestra.run.vm04.stdout: librabbitmq-0.11.0-7.el9.x86_64
2026-03-09T18:37:31.130 INFO:teuthology.orchestra.run.vm04.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:37:31.130 INFO:teuthology.orchestra.run.vm04.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:37:31.130 INFO:teuthology.orchestra.run.vm04.stdout: librdkafka-1.6.1-102.el9.x86_64
2026-03-09T18:37:31.130 INFO:teuthology.orchestra.run.vm04.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:37:31.130 INFO:teuthology.orchestra.run.vm04.stdout: lttng-ust-2.12.0-6.el9.x86_64
2026-03-09T18:37:31.130 INFO:teuthology.orchestra.run.vm04.stdout: parquet-libs-9.0.0-15.el9.x86_64
2026-03-09T18:37:31.130 INFO:teuthology.orchestra.run.vm04.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:37:31.130 INFO:teuthology.orchestra.run.vm04.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:37:31.130 INFO:teuthology.orchestra.run.vm04.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:37:31.130 INFO:teuthology.orchestra.run.vm04.stdout: qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64
2026-03-09T18:37:31.130 INFO:teuthology.orchestra.run.vm04.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:37:31.130 INFO:teuthology.orchestra.run.vm04.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T18:37:31.130 INFO:teuthology.orchestra.run.vm04.stdout: re2-1:20211101-20.el9.x86_64
2026-03-09T18:37:31.130 INFO:teuthology.orchestra.run.vm04.stdout: thrift-0.15.0-4.el9.x86_64
2026-03-09T18:37:31.130 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T18:37:31.130 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-09T18:37:31.321 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: librbd1
2026-03-09T18:37:31.321 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal.
2026-03-09T18:37:31.323 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-09T18:37:31.323 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do.
2026-03-09T18:37:31.323 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-09T18:37:31.494 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: python3-rados
2026-03-09T18:37:31.495 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal.
2026-03-09T18:37:31.496 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-09T18:37:31.497 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do.
2026-03-09T18:37:31.497 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-09T18:37:31.651 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: python3-rgw
2026-03-09T18:37:31.651 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal.
2026-03-09T18:37:31.653 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-09T18:37:31.653 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do.
2026-03-09T18:37:31.653 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-09T18:37:31.807 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: python3-cephfs
2026-03-09T18:37:31.807 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal.
2026-03-09T18:37:31.809 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-09T18:37:31.810 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do.
2026-03-09T18:37:31.810 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-09T18:37:31.959 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: python3-rbd
2026-03-09T18:37:31.959 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal.
2026-03-09T18:37:31.961 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-09T18:37:31.961 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do.
2026-03-09T18:37:31.961 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-09T18:37:32.112 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: rbd-fuse
2026-03-09T18:37:32.112 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal.
2026-03-09T18:37:32.114 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-09T18:37:32.115 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do.
2026-03-09T18:37:32.115 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-09T18:37:32.264 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: rbd-mirror
2026-03-09T18:37:32.265 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal.
2026-03-09T18:37:32.266 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-09T18:37:32.267 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do.
2026-03-09T18:37:32.267 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-09T18:37:32.419 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: rbd-nbd
2026-03-09T18:37:32.419 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal.
2026-03-09T18:37:32.421 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-09T18:37:32.422 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do.
2026-03-09T18:37:32.422 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-09T18:37:32.441 DEBUG:teuthology.orchestra.run.vm04:> sudo yum clean all
2026-03-09T18:37:32.560 INFO:teuthology.orchestra.run.vm04.stdout:56 files removed
2026-03-09T18:37:32.579 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-09T18:37:32.601 DEBUG:teuthology.orchestra.run.vm04:> sudo yum clean expire-cache
2026-03-09T18:37:32.749 INFO:teuthology.orchestra.run.vm04.stdout:Cache was expired
2026-03-09T18:37:32.749 INFO:teuthology.orchestra.run.vm04.stdout:0 files removed
2026-03-09T18:37:32.764 DEBUG:teuthology.parallel:result is None
2026-03-09T18:37:32.764 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm04.local
2026-03-09T18:37:32.764 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm09.local
2026-03-09T18:37:32.764 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-09T18:37:32.764 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-09T18:37:32.789 DEBUG:teuthology.orchestra.run.vm04:> sudo mv -f /etc/yum/pluginconf.d/priorities.conf.orig /etc/yum/pluginconf.d/priorities.conf
2026-03-09T18:37:32.792 DEBUG:teuthology.orchestra.run.vm09:> sudo mv -f /etc/yum/pluginconf.d/priorities.conf.orig /etc/yum/pluginconf.d/priorities.conf
2026-03-09T18:37:32.854 DEBUG:teuthology.parallel:result is None
2026-03-09T18:37:32.859 DEBUG:teuthology.parallel:result is None
2026-03-09T18:37:32.859 DEBUG:teuthology.run_tasks:Unwinding manager clock
2026-03-09T18:37:32.861 INFO:teuthology.task.clock:Checking final clock skew...
2026-03-09T18:37:32.861 DEBUG:teuthology.orchestra.run.vm04:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-09T18:37:32.895 DEBUG:teuthology.orchestra.run.vm09:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-09T18:37:32.908 INFO:teuthology.orchestra.run.vm04.stderr:bash: line 1: ntpq: command not found
2026-03-09T18:37:32.912 INFO:teuthology.orchestra.run.vm04.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-09T18:37:32.912 INFO:teuthology.orchestra.run.vm04.stdout:===============================================================================
2026-03-09T18:37:32.912 INFO:teuthology.orchestra.run.vm04.stdout:^- 172-104-138-148.ip.linod> 3 6 377 32 +1332us[+1329us] +/- 18ms
2026-03-09T18:37:32.912 INFO:teuthology.orchestra.run.vm04.stdout:^* static.222.16.42.77.clie> 2 6 377 31 -6747ns[-9832ns] +/- 2488us
2026-03-09T18:37:32.912 INFO:teuthology.orchestra.run.vm04.stdout:^- stratum2-1.NTP.TechFak.N> 2 7 16 413 +1104us[+1089us] +/- 18ms
2026-03-09T18:37:32.912 INFO:teuthology.orchestra.run.vm04.stdout:^- cloudrouter.1in1.net 2 6 377 31 +230us[ +227us] +/- 63ms
2026-03-09T18:37:32.916 INFO:teuthology.orchestra.run.vm09.stderr:bash: line 1: ntpq: command not found
2026-03-09T18:37:32.919 INFO:teuthology.orchestra.run.vm09.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-09T18:37:32.919 INFO:teuthology.orchestra.run.vm09.stdout:===============================================================================
2026-03-09T18:37:32.919 INFO:teuthology.orchestra.run.vm09.stdout:^- cloudrouter.1in1.net 2 6 7 21 +130us[ +130us] +/- 63ms
2026-03-09T18:37:32.919 INFO:teuthology.orchestra.run.vm09.stdout:^- 172-104-138-148.ip.linod> 3 6 377 33 +1030us[+1038us] +/- 18ms
2026-03-09T18:37:32.919 INFO:teuthology.orchestra.run.vm09.stdout:^* static.222.16.42.77.clie> 2 6 377 32 +35us[ +43us] +/- 2699us
2026-03-09T18:37:32.919 INFO:teuthology.orchestra.run.vm09.stdout:^- stratum2-1.NTP.TechFak.N> 2 6 343 34 +1325us[+1334us] +/- 17ms
2026-03-09T18:37:32.919 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab
2026-03-09T18:37:32.921 INFO:teuthology.task.ansible:Skipping ansible cleanup...
2026-03-09T18:37:32.921 DEBUG:teuthology.run_tasks:Unwinding manager selinux
2026-03-09T18:37:32.923 DEBUG:teuthology.run_tasks:Unwinding manager pcp
2026-03-09T18:37:32.925 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer
2026-03-09T18:37:32.927 INFO:teuthology.task.internal:Duration was 939.032774 seconds
2026-03-09T18:37:32.927 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog
2026-03-09T18:37:32.930 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring...
2026-03-09T18:37:32.930 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-09T18:37:32.954 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-09T18:37:32.990 INFO:teuthology.orchestra.run.vm04.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-09T18:37:33.000 INFO:teuthology.orchestra.run.vm09.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-09T18:37:33.245 INFO:teuthology.task.internal.syslog:Checking logs for errors...
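The clock check above runs `ntpq -p || chronyc sources || true` on each node: `ntpq` fails on these CentOS 9 VMs (chrony is installed instead), the fallback `chronyc sources` reports the skew, and the trailing `|| true` keeps a missing tool from failing the teardown. A minimal Python sketch of that fallback pattern (the function name and shape are illustrative, not teuthology's actual code):

```python
# Build the shell pipeline used for a best-effort clock-skew report:
# try ntpd's ntpq first, fall back to chrony's chronyc, never fail.
def clock_check_command(path="/usr/bin:/usr/sbin"):
    """Return a shell command that reports clock sync regardless of
    whether ntpd or chrony is the installed time daemon."""
    return (
        f"PATH={path} ntpq -p || "
        f"PATH={path} chronyc sources || true"
    )
```

The `|| true` tail is what lets the log show `ntpq: command not found` on stderr while the command as a whole still exits zero.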
2026-03-09T18:37:33.245 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm04.local
2026-03-09T18:37:33.245 DEBUG:teuthology.orchestra.run.vm04:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-09T18:37:33.307 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm09.local
2026-03-09T18:37:33.308 DEBUG:teuthology.orchestra.run.vm09:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-09T18:37:33.335 INFO:teuthology.task.internal.syslog:Gathering journactl...
2026-03-09T18:37:33.335 DEBUG:teuthology.orchestra.run.vm04:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-09T18:37:33.349 DEBUG:teuthology.orchestra.run.vm09:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-09T18:37:33.796 INFO:teuthology.task.internal.syslog:Compressing syslogs...
2026-03-09T18:37:33.796 DEBUG:teuthology.orchestra.run.vm04:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-09T18:37:33.798 DEBUG:teuthology.orchestra.run.vm09:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-09T18:37:33.820 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T18:37:33.821 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T18:37:33.821 INFO:teuthology.orchestra.run.vm04.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-09T18:37:33.821 INFO:teuthology.orchestra.run.vm04.stderr: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-09T18:37:33.821 INFO:teuthology.orchestra.run.vm04.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-09T18:37:33.823 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T18:37:33.824 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T18:37:33.824 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-09T18:37:33.824 INFO:teuthology.orchestra.run.vm09.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-09T18:37:33.824 INFO:teuthology.orchestra.run.vm09.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-09T18:37:33.957 INFO:teuthology.orchestra.run.vm04.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 97.9% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-09T18:37:33.958 INFO:teuthology.orchestra.run.vm09.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 98.1% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-09T18:37:33.959 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo
2026-03-09T18:37:33.962 INFO:teuthology.task.internal:Restoring /etc/sudoers...
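The kern.log scan above is an include-then-exclude pipeline: one `grep -E` flags any line containing BUG, INFO, or DEADLOCK, a long chain of `grep -v` filters drops known-benign matches, and `head -n 1` keeps only the first survivor (an empty result means the check passed). A minimal Python sketch of that filtering logic, with only a few of the exclusion patterns shown (this is an illustration, not teuthology's implementation):

```python
import re

# Lines worth flagging in a kernel log.
INCLUDE = re.compile(r"\bBUG\b|\bINFO\b|\bDEADLOCK\b")

# A small subset of the known-benign patterns filtered out above.
EXCLUDE = [
    re.compile(p)
    for p in (
        r"task .* blocked for more than .* seconds",
        r"lockdep is turned off",
        r"ceph-create-keys: INFO",
        r"\btcmu-runner\b.*\bINFO\b",
    )
]

def first_suspicious_line(lines):
    """Return the first line matching INCLUDE and no EXCLUDE pattern,
    mirroring `grep -E ... | grep -v ... | head -n 1`; None means clean."""
    for line in lines:
        if INCLUDE.search(line) and not any(p.search(line) for p in EXCLUDE):
            return line
    return None
```

In this run both nodes' pipelines produced no output, so the syslog check passed and teardown continued to journal collection and compression.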
2026-03-09T18:37:33.962 DEBUG:teuthology.orchestra.run.vm04:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-09T18:37:34.024 DEBUG:teuthology.orchestra.run.vm09:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-09T18:37:34.049 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump
2026-03-09T18:37:34.051 DEBUG:teuthology.orchestra.run.vm04:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-09T18:37:34.067 DEBUG:teuthology.orchestra.run.vm09:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-09T18:37:34.092 INFO:teuthology.orchestra.run.vm04.stdout:kernel.core_pattern = core
2026-03-09T18:37:34.118 INFO:teuthology.orchestra.run.vm09.stdout:kernel.core_pattern = core
2026-03-09T18:37:34.131 DEBUG:teuthology.orchestra.run.vm04:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-09T18:37:34.161 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T18:37:34.162 DEBUG:teuthology.orchestra.run.vm09:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-09T18:37:34.188 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T18:37:34.188 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive
2026-03-09T18:37:34.191 INFO:teuthology.task.internal:Transferring archived files...
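The coredump unwind above resets `kernel.core_pattern`, deletes any captured cores that `file` attributes to systemd-sysusers (a known noise source), and removes the coredump directory only if it ends up empty; the subsequent `test -e` returning 1 confirms no real cores remained. A rough Python equivalent of that shell one-liner (hypothetical helper, not teuthology's code):

```python
import os
import subprocess

def cleanup_coredumps(archive_dir):
    """Prune known-benign core files, then remove the directory if empty.
    Returns True when the directory is gone (i.e. no real cores captured)."""
    for name in os.listdir(archive_dir):
        path = os.path.join(archive_dir, name)
        # Shell version: file $f | grep -q systemd-sysusers && rm $f || true
        out = subprocess.run(["file", path], capture_output=True, text=True)
        if "systemd-sysusers" in out.stdout:
            os.remove(path)
    try:
        os.rmdir(archive_dir)  # like rmdir --ignore-fail-on-non-empty
    except OSError:
        pass
    return not os.path.isdir(archive_dir)
```

Leaving the directory in place when it is non-empty is deliberate: a surviving core file keeps the directory, which the archive task then transfers for post-mortem analysis.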
2026-03-09T18:37:34.191 DEBUG:teuthology.misc:Transferring archived files from vm04:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/604/remote/vm04
2026-03-09T18:37:34.191 DEBUG:teuthology.orchestra.run.vm04:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-09T18:37:34.233 DEBUG:teuthology.misc:Transferring archived files from vm09:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/604/remote/vm09
2026-03-09T18:37:34.233 DEBUG:teuthology.orchestra.run.vm09:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-09T18:37:34.264 INFO:teuthology.task.internal:Removing archive directory...
2026-03-09T18:37:34.264 DEBUG:teuthology.orchestra.run.vm04:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-09T18:37:34.275 DEBUG:teuthology.orchestra.run.vm09:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-09T18:37:34.321 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload
2026-03-09T18:37:34.323 INFO:teuthology.task.internal:Not uploading archives.
2026-03-09T18:37:34.323 DEBUG:teuthology.run_tasks:Unwinding manager internal.base
2026-03-09T18:37:34.326 INFO:teuthology.task.internal:Tidying up after the test...
2026-03-09T18:37:34.326 DEBUG:teuthology.orchestra.run.vm04:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-09T18:37:34.334 DEBUG:teuthology.orchestra.run.vm09:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-09T18:37:34.350 INFO:teuthology.orchestra.run.vm04.stdout: 8532145 0 drwxr-xr-x 2 ubuntu ubuntu 6 Mar 9 18:37 /home/ubuntu/cephtest
2026-03-09T18:37:34.379 INFO:teuthology.orchestra.run.vm09.stdout: 8532144 0 drwxr-xr-x 2 ubuntu ubuntu 6 Mar 9 18:37 /home/ubuntu/cephtest
2026-03-09T18:37:34.380 DEBUG:teuthology.run_tasks:Unwinding manager console_log
2026-03-09T18:37:34.385 INFO:teuthology.run:Summary data:
description: orch/cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_python}
duration: 939.0327744483948
flavor: default
owner: kyr
success: true
2026-03-09T18:37:34.385 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-09T18:37:34.403 INFO:teuthology.run:pass